Test Report: none_Linux 19734

795b96072c2ea51545c2bdfc984dcdf8fe273799:2024-09-30:36435

Failed tests (1/167)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 71.81        |
TestAddons/parallel/Registry (71.81s)
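The failing step is the in-cluster connectivity probe: the test launches a one-shot busybox pod and expects "wget --spider -S" against the registry service DNS name to report HTTP/1.1 200, but kubectl gives up after a minute with "timed out waiting for the condition". A minimal reproduction sketch, assuming a running "minikube" profile with the registry addon enabled; the probe command is taken verbatim from the log below, while the "dns-test" pod name and the follow-up diagnostics are illustrative additions, not part of the test:

    # The probe that timed out; a healthy registry answers with "HTTP/1.1 200"
    kubectl --context minikube run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # If it hangs: check that the service has endpoints and resolves in-cluster
    kubectl --context minikube -n kube-system get svc,endpoints registry
    kubectl --context minikube run --rm dns-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      nslookup registry.kube-system.svc.cluster.local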

=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.999708ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-49nkl" [f279ea6c-0d65-4d94-9dc1-43ba6d130381] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002989932s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4lsgw" [3bd51464-305d-4990-aed6-cb08ea16c1b9] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003305003s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.083264028s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/30 10:32:46 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:43761               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:21 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 30 Sep 24 10:21 UTC | 30 Sep 24 10:21 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 30 Sep 24 10:21 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 30 Sep 24 10:21 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 30 Sep 24 10:21 UTC | 30 Sep 24 10:22 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 30 Sep 24 10:23 UTC | 30 Sep 24 10:23 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 30 Sep 24 10:32 UTC | 30 Sep 24 10:32 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 30 Sep 24 10:32 UTC | 30 Sep 24 10:32 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:21:13
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:21:13.352720   14152 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:21:13.352887   14152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:21:13.352898   14152 out.go:358] Setting ErrFile to fd 2...
	I0930 10:21:13.352906   14152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:21:13.353082   14152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3681/.minikube/bin
	I0930 10:21:13.353641   14152 out.go:352] Setting JSON to false
	I0930 10:21:13.354552   14152 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":221,"bootTime":1727691452,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:21:13.354644   14152 start.go:139] virtualization: kvm guest
	I0930 10:21:13.356931   14152 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0930 10:21:13.358246   14152 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19734-3681/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 10:21:13.358277   14152 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:21:13.358283   14152 notify.go:220] Checking for updates...
	I0930 10:21:13.359615   14152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:21:13.360997   14152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3681/kubeconfig
	I0930 10:21:13.362431   14152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3681/.minikube
	I0930 10:21:13.363779   14152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 10:21:13.365145   14152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:21:13.366672   14152 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:21:13.376268   14152 out.go:177] * Using the none driver based on user configuration
	I0930 10:21:13.377509   14152 start.go:297] selected driver: none
	I0930 10:21:13.377525   14152 start.go:901] validating driver "none" against <nil>
	I0930 10:21:13.377539   14152 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:21:13.377573   14152 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0930 10:21:13.378007   14152 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0930 10:21:13.378890   14152 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:21:13.379263   14152 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:21:13.379318   14152 cni.go:84] Creating CNI manager for ""
	I0930 10:21:13.379382   14152 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 10:21:13.379396   14152 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 10:21:13.379473   14152 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:21:13.381537   14152 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0930 10:21:13.383470   14152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/config.json ...
	I0930 10:21:13.383504   14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/config.json: {Name:mk1b7757fcffe1c2ef054e98e7fbd4d6b65c08e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:13.383654   14152 start.go:360] acquireMachinesLock for minikube: {Name:mk950621b2cf18d4d46c3c8617fe9495b86929a4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0930 10:21:13.383695   14152 start.go:364] duration metric: took 24.204µs to acquireMachinesLock for "minikube"
	I0930 10:21:13.383714   14152 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 10:21:13.383816   14152 start.go:125] createHost starting for "" (driver="none")
	I0930 10:21:13.385389   14152 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0930 10:21:13.386684   14152 exec_runner.go:51] Run: systemctl --version
	I0930 10:21:13.389306   14152 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0930 10:21:13.389348   14152 client.go:168] LocalClient.Create starting
	I0930 10:21:13.389409   14152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3681/.minikube/certs/ca.pem
	I0930 10:21:13.389441   14152 main.go:141] libmachine: Decoding PEM data...
	I0930 10:21:13.389460   14152 main.go:141] libmachine: Parsing certificate...
	I0930 10:21:13.389509   14152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19734-3681/.minikube/certs/cert.pem
	I0930 10:21:13.389536   14152 main.go:141] libmachine: Decoding PEM data...
	I0930 10:21:13.389552   14152 main.go:141] libmachine: Parsing certificate...
	I0930 10:21:13.389994   14152 client.go:171] duration metric: took 636.505µs to LocalClient.Create
	I0930 10:21:13.390025   14152 start.go:167] duration metric: took 722.263µs to libmachine.API.Create "minikube"
	I0930 10:21:13.390034   14152 start.go:293] postStartSetup for "minikube" (driver="none")
	I0930 10:21:13.390084   14152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 10:21:13.390133   14152 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 10:21:13.398084   14152 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0930 10:21:13.398111   14152 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0930 10:21:13.398124   14152 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0930 10:21:13.400360   14152 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0930 10:21:13.401616   14152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3681/.minikube/addons for local assets ...
	I0930 10:21:13.401669   14152 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-3681/.minikube/files for local assets ...
	I0930 10:21:13.401700   14152 start.go:296] duration metric: took 11.656311ms for postStartSetup
	I0930 10:21:13.402472   14152 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/config.json ...
	I0930 10:21:13.402635   14152 start.go:128] duration metric: took 18.808783ms to createHost
	I0930 10:21:13.402651   14152 start.go:83] releasing machines lock for "minikube", held for 18.942587ms
	I0930 10:21:13.403127   14152 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 10:21:13.403182   14152 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0930 10:21:13.406211   14152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0930 10:21:13.406260   14152 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 10:21:13.416344   14152 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0930 10:21:13.416369   14152 start.go:495] detecting cgroup driver to use...
	I0930 10:21:13.416399   14152 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0930 10:21:13.416527   14152 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 10:21:13.434414   14152 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0930 10:21:13.443091   14152 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0930 10:21:13.451716   14152 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0930 10:21:13.451761   14152 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0930 10:21:13.461639   14152 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 10:21:13.471151   14152 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0930 10:21:13.479700   14152 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 10:21:13.491098   14152 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 10:21:13.499759   14152 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0930 10:21:13.509132   14152 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0930 10:21:13.517257   14152 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0930 10:21:13.526366   14152 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 10:21:13.533302   14152 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 10:21:13.540218   14152 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0930 10:21:13.753304   14152 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0930 10:21:13.819458   14152 start.go:495] detecting cgroup driver to use...
	I0930 10:21:13.819510   14152 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0930 10:21:13.819658   14152 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 10:21:13.839719   14152 exec_runner.go:51] Run: which cri-dockerd
	I0930 10:21:13.840599   14152 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0930 10:21:13.848177   14152 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0930 10:21:13.848195   14152 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0930 10:21:13.848223   14152 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0930 10:21:13.855092   14152 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0930 10:21:13.855236   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube989885707 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0930 10:21:13.862442   14152 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0930 10:21:14.099473   14152 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0930 10:21:14.322661   14152 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0930 10:21:14.322789   14152 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0930 10:21:14.322804   14152 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0930 10:21:14.322862   14152 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0930 10:21:14.330805   14152 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0930 10:21:14.330950   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube628740387 /etc/docker/daemon.json
	I0930 10:21:14.338617   14152 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0930 10:21:14.593243   14152 exec_runner.go:51] Run: sudo systemctl restart docker
	I0930 10:21:14.885437   14152 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0930 10:21:14.896496   14152 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0930 10:21:14.911878   14152 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 10:21:14.922592   14152 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0930 10:21:15.148348   14152 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0930 10:21:15.408148   14152 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0930 10:21:15.646960   14152 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0930 10:21:15.661154   14152 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 10:21:15.671865   14152 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0930 10:21:15.915010   14152 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0930 10:21:15.982544   14152 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0930 10:21:15.982626   14152 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0930 10:21:15.984123   14152 start.go:563] Will wait 60s for crictl version
	I0930 10:21:15.984176   14152 exec_runner.go:51] Run: which crictl
	I0930 10:21:15.985190   14152 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0930 10:21:16.013562   14152 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0930 10:21:16.013621   14152 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0930 10:21:16.032568   14152 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0930 10:21:16.056632   14152 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0930 10:21:16.056707   14152 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0930 10:21:16.059279   14152 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0930 10:21:16.060514   14152 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 10:21:16.060638   14152 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 10:21:16.060651   14152 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0930 10:21:16.060738   14152 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0930 10:21:16.060795   14152 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0930 10:21:16.107177   14152 cni.go:84] Creating CNI manager for ""
	I0930 10:21:16.107199   14152 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 10:21:16.107208   14152 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 10:21:16.107226   14152 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 10:21:16.107368   14152 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 10:21:16.107425   14152 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 10:21:16.115833   14152 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0930 10:21:16.115882   14152 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0930 10:21:16.123361   14152 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0930 10:21:16.123362   14152 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0930 10:21:16.123361   14152 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0930 10:21:16.123416   14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0930 10:21:16.123415   14152 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:21:16.123474   14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0930 10:21:16.133968   14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0930 10:21:16.174854   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube946911628 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0930 10:21:16.185818   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2871785167 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0930 10:21:16.195965   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1757915952 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0930 10:21:16.260992   14152 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 10:21:16.269241   14152 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0930 10:21:16.269260   14152 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0930 10:21:16.269295   14152 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0930 10:21:16.276954   14152 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0930 10:21:16.277084   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3370089736 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0930 10:21:16.284735   14152 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0930 10:21:16.284752   14152 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0930 10:21:16.284793   14152 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0930 10:21:16.292650   14152 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 10:21:16.292782   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube932379393 /lib/systemd/system/kubelet.service
	I0930 10:21:16.300150   14152 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0930 10:21:16.300255   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1779135712 /var/tmp/minikube/kubeadm.yaml.new
	I0930 10:21:16.307512   14152 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0930 10:21:16.308664   14152 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0930 10:21:16.529511   14152 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0930 10:21:16.543855   14152 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube for IP: 10.138.0.48
	I0930 10:21:16.543880   14152 certs.go:194] generating shared ca certs ...
	I0930 10:21:16.543896   14152 certs.go:226] acquiring lock for ca certs: {Name:mk0a5b9b1d30d3d8af9c11762592cf8e7817e041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:16.544032   14152 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-3681/.minikube/ca.key
	I0930 10:21:16.544097   14152 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-3681/.minikube/proxy-client-ca.key
	I0930 10:21:16.544110   14152 certs.go:256] generating profile certs ...
	I0930 10:21:16.544172   14152 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/client.key
	I0930 10:21:16.544199   14152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/client.crt with IP's: []
	I0930 10:21:16.617884   14152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/client.crt ...
	I0930 10:21:16.617910   14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/client.crt: {Name:mk0a31888a10e1b9b9d480a4e5d1e7e81c2faefa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:16.618049   14152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/client.key ...
	I0930 10:21:16.618063   14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/client.key: {Name:mk97492241b85cb3608b33f4ce925f417ccad8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:16.618151   14152 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0930 10:21:16.618165   14152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0930 10:21:16.858398   14152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0930 10:21:16.858426   14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk0da2466d77ccdd4ee35c6d92f955e6dd15b091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:16.858562   14152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0930 10:21:16.858574   14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkf4de6a10a92347f23430cdad86300f89676d1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:16.858642   14152 certs.go:381] copying /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.crt
	I0930 10:21:16.858738   14152 certs.go:385] copying /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.key
	I0930 10:21:16.858807   14152 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.key
	I0930 10:21:16.858827   14152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0930 10:21:17.086340   14152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.crt ...
	I0930 10:21:17.086369   14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.crt: {Name:mk63d5b0cba483badba29f192c4be82a61b1805f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:17.086502   14152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.key ...
	I0930 10:21:17.086516   14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.key: {Name:mk937e2ef95b93b80300b9c639fcd810c9496f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:17.086707   14152 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3681/.minikube/certs/ca-key.pem (1675 bytes)
	I0930 10:21:17.086748   14152 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3681/.minikube/certs/ca.pem (1082 bytes)
	I0930 10:21:17.086783   14152 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3681/.minikube/certs/cert.pem (1123 bytes)
	I0930 10:21:17.086863   14152 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-3681/.minikube/certs/key.pem (1675 bytes)
	I0930 10:21:17.087634   14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 10:21:17.087770   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3483544710 /var/lib/minikube/certs/ca.crt
	I0930 10:21:17.097628   14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0930 10:21:17.097729   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3692581465 /var/lib/minikube/certs/ca.key
	I0930 10:21:17.105978   14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 10:21:17.106078   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2196338853 /var/lib/minikube/certs/proxy-client-ca.crt
	I0930 10:21:17.114093   14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 10:21:17.114214   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3302222190 /var/lib/minikube/certs/proxy-client-ca.key
	I0930 10:21:17.121862   14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0930 10:21:17.121987   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2346850202 /var/lib/minikube/certs/apiserver.crt
	I0930 10:21:17.130369   14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 10:21:17.130511   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3331841777 /var/lib/minikube/certs/apiserver.key
	I0930 10:21:17.138654   14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 10:21:17.138791   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2219864244 /var/lib/minikube/certs/proxy-client.crt
	I0930 10:21:17.146608   14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 10:21:17.146747   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube393364853 /var/lib/minikube/certs/proxy-client.key
	I0930 10:21:17.154543   14152 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0930 10:21:17.154571   14152 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:17.154614   14152 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:17.161954   14152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-3681/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 10:21:17.162100   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1696048025 /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:17.170641   14152 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 10:21:17.170760   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2485428456 /var/lib/minikube/kubeconfig
	I0930 10:21:17.178418   14152 exec_runner.go:51] Run: openssl version
	I0930 10:21:17.181107   14152 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 10:21:17.189685   14152 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:17.190975   14152 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:17.191017   14152 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:17.193720   14152 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 10:21:17.201377   14152 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 10:21:17.202374   14152 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 10:21:17.202411   14152 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:21:17.202514   14152 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0930 10:21:17.217374   14152 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 10:21:17.225664   14152 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 10:21:17.233622   14152 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0930 10:21:17.254075   14152 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 10:21:17.262418   14152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 10:21:17.262442   14152 kubeadm.go:157] found existing configuration files:
	
	I0930 10:21:17.262491   14152 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 10:21:17.270222   14152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 10:21:17.270267   14152 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 10:21:17.277417   14152 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 10:21:17.287232   14152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 10:21:17.287299   14152 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 10:21:17.294522   14152 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 10:21:17.302445   14152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 10:21:17.302497   14152 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 10:21:17.309773   14152 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 10:21:17.317360   14152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 10:21:17.317415   14152 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
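The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes that does not mention the expected control-plane endpoint is deleted so kubeadm init can regenerate it. A minimal sketch of that loop (illustrative only, not minikube's actual implementation):

    // stale_config_cleanup.go - illustrative sketch of the grep/rm
    // sequence above, not minikube's real code.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func staleConfigCleanup(endpoint string) {
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is absent or the
    		// file does not exist; either way the file is safe to delete.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "%q may not be in %s - removing\n", endpoint, f)
    			_ = exec.Command("sudo", "rm", "-f", f).Run()
    		}
    	}
    }

    func main() {
    	staleConfigCleanup("https://control-plane.minikube.internal:8443")
    }

Here every grep fails with exit status 2 because the files do not exist yet, so all four conf files are (re)removed before init.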
	I0930 10:21:17.325814   14152 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0930 10:21:17.359027   14152 kubeadm.go:310] W0930 10:21:17.358912   15471 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:21:17.359583   14152 kubeadm.go:310] W0930 10:21:17.359477   15471 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:21:17.361136   14152 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 10:21:17.361157   14152 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 10:21:17.462693   14152 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 10:21:17.462804   14152 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 10:21:17.462814   14152 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 10:21:17.462821   14152 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 10:21:17.473661   14152 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 10:21:17.476450   14152 out.go:235]   - Generating certificates and keys ...
	I0930 10:21:17.476501   14152 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 10:21:17.476515   14152 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 10:21:17.763541   14152 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 10:21:17.876946   14152 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 10:21:17.961339   14152 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 10:21:18.036728   14152 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 10:21:18.260305   14152 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 10:21:18.260442   14152 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0930 10:21:18.395669   14152 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 10:21:18.395800   14152 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0930 10:21:18.522939   14152 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 10:21:18.640886   14152 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 10:21:18.747681   14152 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 10:21:18.747815   14152 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 10:21:18.857464   14152 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 10:21:18.987130   14152 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 10:21:19.299937   14152 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 10:21:19.654004   14152 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 10:21:19.894999   14152 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 10:21:19.895541   14152 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 10:21:19.897739   14152 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 10:21:19.899745   14152 out.go:235]   - Booting up control plane ...
	I0930 10:21:19.899765   14152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 10:21:19.899778   14152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 10:21:19.900209   14152 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 10:21:19.920460   14152 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 10:21:19.924585   14152 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 10:21:19.924608   14152 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 10:21:20.158781   14152 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 10:21:20.158802   14152 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 10:21:21.160320   14152 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001526429s
	I0930 10:21:21.160345   14152 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 10:21:25.162141   14152 kubeadm.go:310] [api-check] The API server is healthy after 4.001788604s
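Both the kubelet-check and the api-check above are simple healthz polls with a 4m0s budget (the kubelet at http://127.0.0.1:10248/healthz, the apiserver via its own healthz endpoint). A sketch of that polling pattern, under the assumption of a plain HTTP GET loop:

    // healthz_wait.go - illustrative sketch of the kubelet-check /
    // api-check polling pattern above (kubeadm allows up to 4m0s each).
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthy
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }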
	I0930 10:21:25.173260   14152 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 10:21:25.182442   14152 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 10:21:25.198799   14152 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 10:21:25.198821   14152 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 10:21:25.205224   14152 kubeadm.go:310] [bootstrap-token] Using token: 8mbsnh.bfaqwwlbiiiw0kp5
	I0930 10:21:25.206512   14152 out.go:235]   - Configuring RBAC rules ...
	I0930 10:21:25.206542   14152 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 10:21:25.209309   14152 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 10:21:25.214862   14152 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 10:21:25.217020   14152 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 10:21:25.219223   14152 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 10:21:25.221368   14152 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 10:21:25.569040   14152 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 10:21:25.993580   14152 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 10:21:26.568312   14152 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 10:21:26.570053   14152 kubeadm.go:310] 
	I0930 10:21:26.570071   14152 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 10:21:26.570076   14152 kubeadm.go:310] 
	I0930 10:21:26.570080   14152 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 10:21:26.570084   14152 kubeadm.go:310] 
	I0930 10:21:26.570088   14152 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 10:21:26.570098   14152 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 10:21:26.570103   14152 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 10:21:26.570106   14152 kubeadm.go:310] 
	I0930 10:21:26.570110   14152 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 10:21:26.570114   14152 kubeadm.go:310] 
	I0930 10:21:26.570119   14152 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 10:21:26.570122   14152 kubeadm.go:310] 
	I0930 10:21:26.570126   14152 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 10:21:26.570130   14152 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 10:21:26.570134   14152 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 10:21:26.570138   14152 kubeadm.go:310] 
	I0930 10:21:26.570143   14152 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 10:21:26.570147   14152 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 10:21:26.570151   14152 kubeadm.go:310] 
	I0930 10:21:26.570155   14152 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8mbsnh.bfaqwwlbiiiw0kp5 \
	I0930 10:21:26.570158   14152 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:63de383190f50d46ec6dfa9942e832c15098866b42f0ccbc88cee83ba5922779 \
	I0930 10:21:26.570161   14152 kubeadm.go:310] 	--control-plane 
	I0930 10:21:26.570164   14152 kubeadm.go:310] 
	I0930 10:21:26.570167   14152 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 10:21:26.570176   14152 kubeadm.go:310] 
	I0930 10:21:26.570178   14152 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8mbsnh.bfaqwwlbiiiw0kp5 \
	I0930 10:21:26.570181   14152 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:63de383190f50d46ec6dfa9942e832c15098866b42f0ccbc88cee83ba5922779 
	I0930 10:21:26.572834   14152 cni.go:84] Creating CNI manager for ""
	I0930 10:21:26.572859   14152 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 10:21:26.574452   14152 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 10:21:26.575769   14152 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0930 10:21:26.585914   14152 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 10:21:26.586066   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube806266157 /etc/cni/net.d/1-k8s.conflist
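The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration minikube recommends for the "none" driver with the docker runtime. A representative bridge+portmap conflist of that kind (the exact contents and subnet below are assumptions, not the real file):

    // bridge_conflist.go - representative bridge CNI config of the kind
    // written to /etc/cni/net.d/1-k8s.conflist above; the exact file may
    // differ (the subnet here is an assumption).
    package main

    import "fmt"

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() { fmt.Println(bridgeConflist) }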
	I0930 10:21:26.595103   14152 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 10:21:26.595166   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:26.595189   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_30T10_21_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0930 10:21:26.604853   14152 ops.go:34] apiserver oom_adj: -16
	I0930 10:21:26.661637   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:27.162723   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:27.662443   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:28.161865   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:28.662469   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:29.162637   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:29.662427   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:30.162116   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:30.662373   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:31.162398   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:31.662337   14152 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:31.724593   14152 kubeadm.go:1113] duration metric: took 5.129472581s to wait for elevateKubeSystemPrivileges
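The repeated `kubectl get sa default` runs above, spaced roughly 500ms apart, are the elevateKubeSystemPrivileges wait: the cluster-admin binding for kube-system can only apply once the "default" service account exists. A sketch of that retry loop (illustrative, with the binary and kubeconfig paths taken from the log):

    // sa_wait.go - illustrative sketch of the ~500ms polling loop above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig="+kubeconfig)
    		if cmd.Run() == nil {
    			return nil // service account exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not ready within %s", timeout)
    }

    func main() {
    	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
    		"/var/lib/minikube/kubeconfig", time.Minute)
    	fmt.Println(err)
    }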
	I0930 10:21:31.724627   14152 kubeadm.go:394] duration metric: took 14.522219874s to StartCluster
	I0930 10:21:31.724658   14152 settings.go:142] acquiring lock: {Name:mkba5c1698050cdfa071486ada1fbbed08e1f420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:31.724730   14152 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-3681/kubeconfig
	I0930 10:21:31.725503   14152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-3681/kubeconfig: {Name:mka1d3ed23933c1059435012f9bcdee38f5f1e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:31.725755   14152 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 10:21:31.725838   14152 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0930 10:21:31.725954   14152 addons.go:69] Setting yakd=true in profile "minikube"
	I0930 10:21:31.725954   14152 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0930 10:21:31.725971   14152 addons.go:234] Setting addon yakd=true in "minikube"
	I0930 10:21:31.725982   14152 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0930 10:21:31.726001   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:31.726018   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:31.726057   14152 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:21:31.726095   14152 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0930 10:21:31.726107   14152 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0930 10:21:31.726526   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.726530   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.726538   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.726546   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.726566   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.726579   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.726798   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.726812   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.726840   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.727088   14152 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0930 10:21:31.727110   14152 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0930 10:21:31.727184   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:31.727389   14152 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0930 10:21:31.727419   14152 mustload.go:65] Loading cluster: minikube
	I0930 10:21:31.727657   14152 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:21:31.727803   14152 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0930 10:21:31.727820   14152 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0930 10:21:31.728285   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.728290   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.728297   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.728302   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.728323   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.728337   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.728455   14152 addons.go:69] Setting volcano=true in profile "minikube"
	I0930 10:21:31.728459   14152 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0930 10:21:31.728469   14152 addons.go:234] Setting addon volcano=true in "minikube"
	I0930 10:21:31.728475   14152 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0930 10:21:31.728493   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:31.728499   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:31.729107   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.729120   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.729149   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.729172   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.729186   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.729218   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.729826   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.729842   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.729872   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.730353   14152 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0930 10:21:31.730520   14152 out.go:177] * Configuring local host environment ...
	I0930 10:21:31.730533   14152 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0930 10:21:31.730578   14152 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0930 10:21:31.730612   14152 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0930 10:21:31.730637   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:31.730708   14152 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0930 10:21:31.730813   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:31.731005   14152 addons.go:69] Setting registry=true in profile "minikube"
	I0930 10:21:31.731029   14152 addons.go:234] Setting addon registry=true in "minikube"
	I0930 10:21:31.731057   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:31.731098   14152 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0930 10:21:31.731121   14152 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0930 10:21:31.731148   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:31.731300   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.731319   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.731348   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.731694   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.731713   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.731731   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.731740   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.731745   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.731775   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.730545   14152 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0930 10:21:31.731850   14152 host.go:66] Checking if "minikube" exists ...
	W0930 10:21:31.732026   14152 out.go:270] * 
	W0930 10:21:31.732043   14152 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	I0930 10:21:31.732048   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.732061   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.732092   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0930 10:21:31.732051   14152 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0930 10:21:31.737306   14152 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0930 10:21:31.737338   14152 out.go:270] * 
	W0930 10:21:31.737389   14152 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0930 10:21:31.737405   14152 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0930 10:21:31.737420   14152 out.go:270] * 
	W0930 10:21:31.737445   14152 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0930 10:21:31.737699   14152 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0930 10:21:31.737728   14152 out.go:270] * 
	W0930 10:21:31.737754   14152 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0930 10:21:31.737800   14152 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 10:21:31.741393   14152 out.go:177] * Verifying Kubernetes components...
	I0930 10:21:31.742687   14152 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0930 10:21:31.747444   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.747444   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.752130   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.752156   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.752189   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.752502   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.754451   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.757873   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.759687   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.759887   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.764456   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.768966   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.769018   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.769238   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.769285   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.771960   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.772021   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.776563   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.776605   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.776808   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.776861   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.778000   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.778961   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.778996   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.781185   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.781230   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.783576   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.785107   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.785125   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.787568   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.787774   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.787792   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.796468   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.796481   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.796488   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.796498   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.796521   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.796538   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.796476   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.796772   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
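The interleaved egrep/cat/healthz triples above (one per addon goroutine checking apiserver status) follow a fixed shape: resolve the apiserver's cgroup-v1 freezer path from /proc/15904/cgroup, confirm freezer.state is THAWED (i.e. the cluster is not paused), then hit https://10.138.0.48:8443/healthz. A sketch of the freezer half of that probe, under the assumption of cgroup v1:

    // freezer_check.go - illustrative sketch of the apiserver status
    // probe above; cgroup-v1 freezer layout assumed.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func apiserverFrozen(pid int) (bool, error) {
    	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
    	if err != nil {
    		return false, err
    	}
    	for _, line := range strings.Split(string(data), "\n") {
    		// cgroup-v1 lines look like "3:freezer:/kubepods/burstable/...".
    		if strings.Contains(line, ":freezer:") {
    			parts := strings.SplitN(line, ":", 3)
    			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
    			if err != nil {
    				return false, err
    			}
    			return strings.TrimSpace(string(state)) != "THAWED", nil
    		}
    	}
    	return false, fmt.Errorf("no freezer cgroup for pid %d", pid)
    }

    func main() {
    	frozen, err := apiserverFrozen(15904) // PID taken from the log above
    	fmt.Println(frozen, err)
    }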
	I0930 10:21:31.797605   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.797646   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.798687   14152 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0930 10:21:31.801973   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.802016   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.802647   14152 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0930 10:21:31.802710   14152 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0930 10:21:31.802911   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1355357603 /etc/kubernetes/addons/yakd-ns.yaml
	I0930 10:21:31.803409   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.803503   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.804801   14152 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0930 10:21:31.804848   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:31.805077   14152 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0930 10:21:31.805489   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.805497   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.805508   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.805537   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.806328   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.806346   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.806378   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.806449   14152 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0930 10:21:31.806471   14152 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0930 10:21:31.806602   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3013348751 /etc/kubernetes/addons/ig-namespace.yaml
	I0930 10:21:31.806823   14152 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0930 10:21:31.806857   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:31.807607   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:31.807626   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:31.807662   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:31.807826   14152 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0930 10:21:31.809266   14152 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 10:21:31.809302   14152 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 10:21:31.809424   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4238186474 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 10:21:31.814737   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.814760   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.814878   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.814899   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.814922   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.816374   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.816426   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.824974   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.824998   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.826063   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.826469   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.827464   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.827714   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.827730   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.827988   14152 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0930 10:21:31.828032   14152 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0930 10:21:31.829307   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.829326   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.829736   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.829757   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:31.833125   14152 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0930 10:21:31.833251   14152 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0930 10:21:31.833344   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.833429   14152 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 10:21:31.833546   14152 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0930 10:21:31.833932   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.834156   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.834246   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.834262   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2296623654 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 10:21:31.836538   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.836972   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.837441   14152 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 10:21:31.837471   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0930 10:21:31.837591   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1892949902 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 10:21:31.838979   14152 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0930 10:21:31.839012   14152 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0930 10:21:31.840073   14152 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0930 10:21:31.840155   14152 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0930 10:21:31.842453   14152 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:21:31.842592   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0930 10:21:31.843418   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2409048212 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:21:31.843131   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.845268   14152 out.go:177]   - Using image docker.io/registry:2.8.3
	I0930 10:21:31.847170   14152 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0930 10:21:31.847190   14152 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0930 10:21:31.848202   14152 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0930 10:21:31.848228   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0930 10:21:31.848243   14152 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0930 10:21:31.848267   14152 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0930 10:21:31.848379   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3311592831 /etc/kubernetes/addons/yakd-sa.yaml
	I0930 10:21:31.848702   14152 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0930 10:21:31.848723   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0930 10:21:31.848834   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2188874422 /etc/kubernetes/addons/registry-rc.yaml
	I0930 10:21:31.849186   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3013926792 /etc/kubernetes/addons/volcano-deployment.yaml
	I0930 10:21:31.853855   14152 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 10:21:31.853944   14152 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0930 10:21:31.854143   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1635928611 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 10:21:31.854262   14152 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0930 10:21:31.855956   14152 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0930 10:21:31.864385   14152 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0930 10:21:31.865054   14152 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 10:21:31.865086   14152 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 10:21:31.865211   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1550161587 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 10:21:31.868029   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.868080   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.883492   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:31.883862   14152 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
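The sed pipeline above patches the CoreDNS ConfigMap in place: it injects a hosts block mapping host.minikube.internal ahead of the forward plugin and a log directive ahead of errors, then replaces the ConfigMap. A representative fragment of the resulting Corefile (the stock plugins between these lines are omitted; exact contents are an assumption):

    // corefile_patch.go - representative result of the sed pipeline
    // above; the rest of the stock Corefile is omitted here.
    package main

    import "fmt"

    const corefileFragment = `.:53 {
            log
            errors
            hosts {
               127.0.0.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf
    }`

    func main() { fmt.Println(corefileFragment) }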
	I0930 10:21:31.883929   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.883972   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.884280   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:21:31.885289   14152 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0930 10:21:31.887211   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.887238   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.887691   14152 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 10:21:31.887789   14152 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0930 10:21:31.887918   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube608355028 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 10:21:31.894421   14152 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0930 10:21:31.894451   14152 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0930 10:21:31.894563   14152 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 10:21:31.894564   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.894584   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.894589   14152 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0930 10:21:31.894704   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3571105200 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 10:21:31.898735   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.899990   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.901138   14152 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:21:31.901160   14152 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 10:21:31.901257   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3982265590 /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:21:31.901649   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.901669   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.902128   14152 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 10:21:31.903263   14152 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0930 10:21:31.904147   14152 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:21:31.904277   14152 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0930 10:21:31.904287   14152 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:21:31.904716   14152 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:21:31.906514   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3010322314 /etc/kubernetes/addons/ig-role.yaml
	I0930 10:21:31.906965   14152 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0930 10:21:31.917417   14152 out.go:177]   - Using image docker.io/busybox:stable
	I0930 10:21:31.917568   14152 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0930 10:21:31.918020   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:31.918085   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:31.918149   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.918159   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0930 10:21:31.918410   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1820589435 /etc/kubernetes/addons/registry-svc.yaml
	I0930 10:21:31.918722   14152 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 10:21:31.918744   14152 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0930 10:21:31.918859   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4199368668 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 10:21:31.919086   14152 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 10:21:31.919105   14152 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0930 10:21:31.919218   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube182016945 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 10:21:31.927542   14152 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:21:31.927567   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0930 10:21:31.927678   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1597802008 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:21:31.927923   14152 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0930 10:21:31.928532   14152 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0930 10:21:31.928570   14152 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0930 10:21:31.928691   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1486687048 /etc/kubernetes/addons/yakd-crb.yaml
	I0930 10:21:31.930627   14152 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0930 10:21:31.930652   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0930 10:21:31.930779   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:31.930798   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:31.931927   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube424137532 /etc/kubernetes/addons/deployment.yaml
	I0930 10:21:31.934621   14152 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 10:21:31.934639   14152 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0930 10:21:31.934724   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3439996899 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 10:21:31.934763   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:21:31.934855   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:31.934895   14152 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 10:21:31.934908   14152 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0930 10:21:31.934914   14152 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0930 10:21:31.934950   14152 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0930 10:21:31.942294   14152 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 10:21:31.942319   14152 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0930 10:21:31.942435   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1497836714 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 10:21:31.948379   14152 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 10:21:31.948408   14152 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0930 10:21:31.948555   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3817326998 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 10:21:31.956403   14152 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 10:21:31.956577   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1568443454 /etc/kubernetes/addons/storageclass.yaml
	I0930 10:21:31.963670   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:21:31.964288   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0930 10:21:31.967375   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 10:21:31.968471   14152 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0930 10:21:31.968491   14152 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0930 10:21:31.968608   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2918197529 /etc/kubernetes/addons/yakd-svc.yaml
	I0930 10:21:31.968640   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1444247075 /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:21:31.970965   14152 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 10:21:31.970995   14152 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0930 10:21:31.971123   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2520277398 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 10:21:31.974343   14152 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:21:31.974375   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0930 10:21:31.974497   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3697893580 /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:21:31.996891   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:21:32.000241   14152 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 10:21:32.000278   14152 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0930 10:21:32.000411   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2817648918 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 10:21:32.002142   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:21:32.003458   14152 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 10:21:32.003483   14152 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0930 10:21:32.003601   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube285494986 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 10:21:32.009297   14152 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 10:21:32.009322   14152 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0930 10:21:32.009437   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3030886932 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 10:21:32.012899   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 10:21:32.018575   14152 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 10:21:32.018728   14152 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0930 10:21:32.018884   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2090965801 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 10:21:32.027653   14152 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:32.027680   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0930 10:21:32.027809   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2650349168 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:32.033545   14152 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 10:21:32.033573   14152 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0930 10:21:32.033779   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2521279534 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 10:21:32.040994   14152 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:21:32.041022   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0930 10:21:32.041148   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube261155437 /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:21:32.079130   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:32.114801   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:21:32.130444   14152 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0930 10:21:32.130499   14152 exec_runner.go:151] cp: inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0930 10:21:32.130658   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube216802978 /etc/kubernetes/addons/ig-configmap.yaml
	I0930 10:21:32.166060   14152 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 10:21:32.166102   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0930 10:21:32.166249   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1364075100 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 10:21:32.173314   14152 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0930 10:21:32.173348   14152 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0930 10:21:32.173459   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4035067505 /etc/kubernetes/addons/ig-crd.yaml
	I0930 10:21:32.179427   14152 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0930 10:21:32.235605   14152 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:21:32.235649   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0930 10:21:32.235806   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1048091852 /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:21:32.236029   14152 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 10:21:32.236061   14152 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0930 10:21:32.236172   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3281754744 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 10:21:32.276753   14152 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 10:21:32.276786   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0930 10:21:32.276932   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2819992982 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 10:21:32.296817   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:21:32.347207   14152 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0930 10:21:32.353123   14152 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0930 10:21:32.353146   14152 node_ready.go:38] duration metric: took 5.901148ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0930 10:21:32.353158   14152 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0930 10:21:32.363747   14152 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l6kz2" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:32.462401   14152 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 10:21:32.462445   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0930 10:21:32.463446   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube28210301 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 10:21:32.536136   14152 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:21:32.536174   14152 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0930 10:21:32.536310   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube344804646 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:21:32.568684   14152 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0930 10:21:32.629189   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:21:32.898478   14152 addons.go:475] Verifying addon registry=true in "minikube"
	I0930 10:21:32.905179   14152 out.go:177] * Verifying registry addon...
	I0930 10:21:32.910810   14152 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0930 10:21:32.922720   14152 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 10:21:32.922766   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
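The kapi.go lines that repeat "current state: Pending" from here on are a poll loop: list pods matching the label selector, log the current phase, and retry until every match is Running. A rough client-go equivalent; the poll interval and package name are assumptions, not minikube's code:

	package kapi

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls pods matching selector in ns until all are Running.
	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					break
				}
			}
			if running {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond): // illustrative interval
			}
		}
	}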
	I0930 10:21:32.979526   14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.015196962s)
	I0930 10:21:33.074951   14152 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
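The rescale logged by kapi.go:214 trims CoreDNS to a single replica; the second replica then terminates, which is why pod_ready later finds a coredns pod in phase "Succeeded". One way to perform such a rescale through client-go's scale subresource, as a sketch that assumes the default minikube kubeconfig path:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		ctx := context.Background()
		// Read the current scale, set replicas to 1, and write it back.
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}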
	I0930 10:21:33.135180   14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.133007667s)
	I0930 10:21:33.178877   14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.215170848s)
	I0930 10:21:33.407442   14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.292587619s)
	I0930 10:21:33.411695   14152 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0930 10:21:33.415123   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:33.440046   14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.505240158s)
	I0930 10:21:33.440078   14152 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0930 10:21:33.568298   14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.271415157s)
	I0930 10:21:33.875711   14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.796525324s)
	W0930 10:21:33.875814   14152 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:21:33.875851   14152 retry.go:31] will retry after 343.894474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
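This failure is the usual CRD-establishment race: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, so the first apply finds "no matches for kind". The log retries after 343ms, and the attempt at 10:21:34 adds --force, which eventually succeeds at 10:21:37. A sketch of that retry shape, with illustrative backoff values:

	package addons

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// applyWithCRDRetry re-runs kubectl apply while the output still
	// reports a missing kind mapping (the CRD race); any other failure
	// is returned immediately.
	func applyWithCRDRetry(files []string, attempts int) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		var lastOut []byte
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			if !strings.Contains(string(out), "no matches for kind") {
				return fmt.Errorf("apply: %v: %s", err, out) // not the CRD race
			}
			lastOut, lastErr = out, err
			time.Sleep(time.Duration(i+1) * 350 * time.Millisecond)
		}
		return fmt.Errorf("apply still failing after retries: %v: %s", lastErr, lastOut)
	}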
	I0930 10:21:33.914583   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:34.220092   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:34.371081   14152 pod_ready.go:103] pod "coredns-7c65d6cfc9-l6kz2" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:34.416874   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:34.916026   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:35.073669   14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.155474067s)
	I0930 10:21:35.378160   14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.748889353s)
	I0930 10:21:35.378194   14152 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0930 10:21:35.380330   14152 out.go:177] * Verifying csi-hostpath-driver addon...
	I0930 10:21:35.382823   14152 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0930 10:21:35.387406   14152 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 10:21:35.387435   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:35.417316   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:35.886861   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:35.914534   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:36.387853   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:36.415044   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:36.869759   14152 pod_ready.go:103] pod "coredns-7c65d6cfc9-l6kz2" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:36.888279   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:36.914215   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:37.165179   14152 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.945029873s)
	I0930 10:21:37.386847   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:37.414757   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:37.886967   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:37.914372   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:38.387715   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:38.414613   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:38.893336   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:38.893987   14152 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0930 10:21:38.894120   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2171123940 /var/lib/minikube/google_application_credentials.json
	I0930 10:21:38.894140   14152 pod_ready.go:103] pod "coredns-7c65d6cfc9-l6kz2" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:38.905222   14152 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0930 10:21:38.905346   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3307592155 /var/lib/minikube/google_cloud_project
	I0930 10:21:38.914567   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:38.915425   14152 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0930 10:21:38.915470   14152 host.go:66] Checking if "minikube" exists ...
	I0930 10:21:38.916118   14152 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0930 10:21:38.916143   14152 api_server.go:166] Checking apiserver status ...
	I0930 10:21:38.916177   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:38.931589   14152 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15904/cgroup
	I0930 10:21:38.941360   14152 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538"
	I0930 10:21:38.941416   14152 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/829177e649efe4785fc7a85554e61f96c3c1df961e97b5fea4c548a8c5382538/freezer.state
	I0930 10:21:38.951157   14152 api_server.go:204] freezer state: "THAWED"
	I0930 10:21:38.951192   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:38.954534   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
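Before setting up gcp-auth, the tool re-verifies the apiserver: pgrep locates the process, its cgroup-v1 freezer state is confirmed THAWED (i.e. the pod is not paused), and /healthz is probed. A sketch of the freezer lookup; the paths follow the cgroup-v1 layout shown in the log, and cgroup-v2 hosts lay this out differently:

	package health

	import (
		"fmt"
		"os"
		"strings"
	)

	// freezerState finds the freezer cgroup for pid via /proc/<pid>/cgroup
	// and returns its state, "THAWED" or "FROZEN".
	func freezerState(pid int) (string, error) {
		data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
		if err != nil {
			return "", err
		}
		for _, line := range strings.Split(string(data), "\n") {
			// cgroup-v1 lines look like "3:freezer:/kubepods/burstable/pod<uid>/<id>"
			parts := strings.SplitN(line, ":", 3)
			if len(parts) == 3 && parts[1] == "freezer" {
				state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
				if err != nil {
					return "", err
				}
				return strings.TrimSpace(string(state)), nil
			}
		}
		return "", fmt.Errorf("no freezer controller for pid %d", pid)
	}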
	I0930 10:21:38.954600   14152 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0930 10:21:39.010245   14152 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:21:39.030624   14152 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0930 10:21:39.068581   14152 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 10:21:39.068636   14152 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0930 10:21:39.068791   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1575989071 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 10:21:39.079426   14152 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 10:21:39.079459   14152 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0930 10:21:39.110429   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3671691026 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 10:21:39.122163   14152 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:21:39.122197   14152 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0930 10:21:39.122332   14152 exec_runner.go:51] Run: sudo cp -a /tmp/minikube287679987 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:21:39.154899   14152 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:21:39.387572   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:39.415256   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:39.549098   14152 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0930 10:21:39.550614   14152 out.go:177] * Verifying gcp-auth addon...
	I0930 10:21:39.552962   14152 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0930 10:21:39.555080   14152 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 10:21:39.869421   14152 pod_ready.go:93] pod "coredns-7c65d6cfc9-l6kz2" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:39.869449   14152 pod_ready.go:82] duration metric: took 7.505623079s for pod "coredns-7c65d6cfc9-l6kz2" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:39.869478   14152 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vkdlc" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:39.874964   14152 pod_ready.go:98] pod "coredns-7c65d6cfc9-vkdlc" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.4 PodIPs:[{IP:10.244.0.4}] StartTime:2024-09-30 10:21:31 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-30 10:21:32 +0000 UTC,FinishedAt:2024-09-30 10:21:38 +0000 UTC,ContainerID:docker://46ee976023817f632b988d6749abed52c67d9c4ed3b4abbc09464bded457caa4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://46ee976023817f632b988d6749abed52c67d9c4ed3b4abbc09464bded457caa4 Started:0xc002814020 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0025adde0} {Name:kube-api-access-wcn9x MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0025addf0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0930 10:21:39.874997   14152 pod_ready.go:82] duration metric: took 5.508029ms for pod "coredns-7c65d6cfc9-vkdlc" in "kube-system" namespace to be "Ready" ...
	E0930 10:21:39.875011   14152 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-vkdlc" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:39 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-30 10:21:31 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.4 PodIPs:[{IP:10.244.0.4}] StartTime:2024-09-30 10:21:31 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-30 10:21:32 +0000 UTC,FinishedAt:2024-09-30 10:21:38 +0000 UTC,ContainerID:docker://46ee976023817f632b988d6749abed52c67d9c4ed3b4abbc09464bded457caa4,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://46ee976023817f632b988d6749abed52c67d9c4ed3b4abbc09464bded457caa4 Started:0xc002814020 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0025adde0} {Name:kube-api-access-wcn9x MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0025addf0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0930 10:21:39.875024   14152 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
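pod_ready treats the Succeeded coredns-7c65d6cfc9-vkdlc pod, the replica removed by the earlier rescale, as terminal and moves on rather than waiting for a Ready condition that will never arrive. The check reduces to something like the following sketch, which is not minikube's pod_ready.go:

	package kapi

	import (
		corev1 "k8s.io/api/core/v1"
	)

	// podReady reports (ready, skip): pods in a terminal phase, like the
	// Succeeded coredns replica above, are skipped rather than awaited;
	// otherwise readiness follows the Ready condition.
	func podReady(pod *corev1.Pod) (ready, skip bool) {
		if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
			return false, true
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, false
			}
		}
		return false, false
	}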
	I0930 10:21:39.889049   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:39.914003   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:40.386749   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:40.414767   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:40.886561   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:40.914683   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:41.386623   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:41.414927   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:41.880106   14152 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:41.880126   14152 pod_ready.go:82] duration metric: took 2.00509396s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.880135   14152 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.884221   14152 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:41.884245   14152 pod_ready.go:82] duration metric: took 4.103459ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.884259   14152 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.886349   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:41.888316   14152 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:41.888332   14152 pod_ready.go:82] duration metric: took 4.066357ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.888340   14152 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6zcvv" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.892446   14152 pod_ready.go:93] pod "kube-proxy-6zcvv" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:41.892468   14152 pod_ready.go:82] duration metric: took 4.11999ms for pod "kube-proxy-6zcvv" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.892479   14152 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.914231   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:42.268095   14152 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:42.268119   14152 pod_ready.go:82] duration metric: took 375.632443ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:42.268129   14152 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-6496t" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:42.388827   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:42.414683   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:42.666957   14152 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-6496t" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:42.666979   14152 pod_ready.go:82] duration metric: took 398.843778ms for pod "nvidia-device-plugin-daemonset-6496t" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:42.666987   14152 pod_ready.go:39] duration metric: took 10.313817404s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:21:42.667002   14152 api_server.go:52] waiting for apiserver process to appear ...
	I0930 10:21:42.667050   14152 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:42.683903   14152 api_server.go:72] duration metric: took 10.944751852s to wait for apiserver process to appear ...
	I0930 10:21:42.683923   14152 api_server.go:88] waiting for apiserver healthz status ...
	I0930 10:21:42.683944   14152 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0930 10:21:42.687306   14152 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0930 10:21:42.688067   14152 api_server.go:141] control plane version: v1.31.1
	I0930 10:21:42.688090   14152 api_server.go:131] duration metric: took 4.159551ms to wait for apiserver health ...
	I0930 10:21:42.688100   14152 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 10:21:42.872150   14152 system_pods.go:59] 16 kube-system pods found
	I0930 10:21:42.872181   14152 system_pods.go:61] "coredns-7c65d6cfc9-l6kz2" [8c9f80b9-eea9-44a8-815c-69b4dcceecf9] Running
	I0930 10:21:42.872199   14152 system_pods.go:61] "csi-hostpath-attacher-0" [8baace2a-d4f6-46fc-906b-a4fb78e8e517] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 10:21:42.872208   14152 system_pods.go:61] "csi-hostpath-resizer-0" [affbdd42-4430-4ff1-a978-941e66701b22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 10:21:42.872219   14152 system_pods.go:61] "csi-hostpathplugin-6dwlc" [ff93f1cc-212a-41e5-be3e-db0842b636c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 10:21:42.872225   14152 system_pods.go:61] "etcd-ubuntu-20-agent-2" [60eb3919-ff73-4033-b678-1f1dc0a96b49] Running
	I0930 10:21:42.872231   14152 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [000c5b05-949f-4593-88f6-17f1d9d1342a] Running
	I0930 10:21:42.872240   14152 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [7a0e0e79-569c-4842-8d03-c2f0d5aa842c] Running
	I0930 10:21:42.872246   14152 system_pods.go:61] "kube-proxy-6zcvv" [fde2c7b5-3e42-48a5-9b2a-670fc6e8e59f] Running
	I0930 10:21:42.872254   14152 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [0f7cf227-109f-4118-8f0d-13b56957b763] Running
	I0930 10:21:42.872262   14152 system_pods.go:61] "metrics-server-84c5f94fbc-k6tlb" [9dab8e12-be75-43d4-b706-334dbdb7b9c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 10:21:42.872271   14152 system_pods.go:61] "nvidia-device-plugin-daemonset-6496t" [5671e02b-bae3-433f-98c5-56b427f3e666] Running
	I0930 10:21:42.872277   14152 system_pods.go:61] "registry-66c9cd494c-49nkl" [f279ea6c-0d65-4d94-9dc1-43ba6d130381] Running
	I0930 10:21:42.872284   14152 system_pods.go:61] "registry-proxy-4lsgw" [3bd51464-305d-4990-aed6-cb08ea16c1b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 10:21:42.872294   14152 system_pods.go:61] "snapshot-controller-56fcc65765-4tnvr" [b8ea90d4-4a6a-4c29-b153-af3b944a30d3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:21:42.872306   14152 system_pods.go:61] "snapshot-controller-56fcc65765-9sn5g" [9c07447c-6747-44fc-959a-b5b2e5744ca4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:21:42.872315   14152 system_pods.go:61] "storage-provisioner" [5a4ed029-4ecb-43ef-a31f-410f1039bb84] Running
	I0930 10:21:42.872324   14152 system_pods.go:74] duration metric: took 184.217541ms to wait for pod list to return data ...
	I0930 10:21:42.872336   14152 default_sa.go:34] waiting for default service account to be created ...
	I0930 10:21:42.887082   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:42.914984   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:43.075588   14152 default_sa.go:45] found service account: "default"
	I0930 10:21:43.075646   14152 default_sa.go:55] duration metric: took 203.274919ms for default service account to be created ...
	I0930 10:21:43.075660   14152 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 10:21:43.272061   14152 system_pods.go:86] 16 kube-system pods found
	I0930 10:21:43.272087   14152 system_pods.go:89] "coredns-7c65d6cfc9-l6kz2" [8c9f80b9-eea9-44a8-815c-69b4dcceecf9] Running
	I0930 10:21:43.272095   14152 system_pods.go:89] "csi-hostpath-attacher-0" [8baace2a-d4f6-46fc-906b-a4fb78e8e517] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 10:21:43.272101   14152 system_pods.go:89] "csi-hostpath-resizer-0" [affbdd42-4430-4ff1-a978-941e66701b22] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 10:21:43.272108   14152 system_pods.go:89] "csi-hostpathplugin-6dwlc" [ff93f1cc-212a-41e5-be3e-db0842b636c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 10:21:43.272112   14152 system_pods.go:89] "etcd-ubuntu-20-agent-2" [60eb3919-ff73-4033-b678-1f1dc0a96b49] Running
	I0930 10:21:43.272116   14152 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [000c5b05-949f-4593-88f6-17f1d9d1342a] Running
	I0930 10:21:43.272121   14152 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [7a0e0e79-569c-4842-8d03-c2f0d5aa842c] Running
	I0930 10:21:43.272124   14152 system_pods.go:89] "kube-proxy-6zcvv" [fde2c7b5-3e42-48a5-9b2a-670fc6e8e59f] Running
	I0930 10:21:43.272128   14152 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [0f7cf227-109f-4118-8f0d-13b56957b763] Running
	I0930 10:21:43.272133   14152 system_pods.go:89] "metrics-server-84c5f94fbc-k6tlb" [9dab8e12-be75-43d4-b706-334dbdb7b9c3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 10:21:43.272139   14152 system_pods.go:89] "nvidia-device-plugin-daemonset-6496t" [5671e02b-bae3-433f-98c5-56b427f3e666] Running
	I0930 10:21:43.272143   14152 system_pods.go:89] "registry-66c9cd494c-49nkl" [f279ea6c-0d65-4d94-9dc1-43ba6d130381] Running
	I0930 10:21:43.272152   14152 system_pods.go:89] "registry-proxy-4lsgw" [3bd51464-305d-4990-aed6-cb08ea16c1b9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 10:21:43.272157   14152 system_pods.go:89] "snapshot-controller-56fcc65765-4tnvr" [b8ea90d4-4a6a-4c29-b153-af3b944a30d3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:21:43.272163   14152 system_pods.go:89] "snapshot-controller-56fcc65765-9sn5g" [9c07447c-6747-44fc-959a-b5b2e5744ca4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:21:43.272166   14152 system_pods.go:89] "storage-provisioner" [5a4ed029-4ecb-43ef-a31f-410f1039bb84] Running
	I0930 10:21:43.272173   14152 system_pods.go:126] duration metric: took 196.507996ms to wait for k8s-apps to be running ...
	I0930 10:21:43.272182   14152 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 10:21:43.272221   14152 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:21:43.284299   14152 system_svc.go:56] duration metric: took 12.107947ms WaitForService to wait for kubelet
	I0930 10:21:43.284324   14152 kubeadm.go:582] duration metric: took 11.54517909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:21:43.284341   14152 node_conditions.go:102] verifying NodePressure condition ...
	I0930 10:21:43.387174   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:43.413986   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:43.468658   14152 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0930 10:21:43.468686   14152 node_conditions.go:123] node cpu capacity is 8
	I0930 10:21:43.468700   14152 node_conditions.go:105] duration metric: took 184.353931ms to run NodePressure ...
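The NodePressure step reads the node's reported capacity: 304681132Ki of ephemeral storage and 8 CPUs here. Listing those values through client-go looks roughly like the sketch below, with the kubeconfig path assumed:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity is a map of resource name to quantity; copy the
			// quantities into locals so String() (pointer receiver) works.
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}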
	I0930 10:21:43.468713   14152 start.go:241] waiting for startup goroutines ...
	I0930 10:21:43.887566   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:43.914626   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:44.387061   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:44.414212   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:44.887302   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:44.914239   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:45.387498   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:45.414401   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:45.888166   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:45.914623   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:46.387873   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:46.413653   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:46.888553   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:46.914793   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:47.386706   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:47.415445   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:47.887357   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:47.914499   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:48.387088   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:48.414089   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:48.887238   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:48.914475   14152 kapi.go:107] duration metric: took 16.003667595s to wait for kubernetes.io/minikube-addons=registry ...
	I0930 10:21:49.387820   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:49.887870   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:50.390080   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:50.887211   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:51.387001   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:51.887356   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:52.388486   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:52.888370   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:53.387314   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:53.887933   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:54.387564   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:54.887466   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:55.388456   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:55.887949   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:56.387536   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:56.887445   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:57.387827   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:57.887791   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:58.387647   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:58.890090   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:59.386963   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:59.888054   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:00.414086   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:00.887718   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:01.387129   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:01.887646   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:02.388034   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:02.887363   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:03.387921   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:03.921339   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:04.387348   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:04.886806   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:05.388074   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:05.887632   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:06.476475   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:06.887462   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:07.386764   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:07.888203   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:08.387043   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:08.887380   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:09.387364   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:09.886516   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:10.387144   14152 kapi.go:107] duration metric: took 35.004322995s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0930 10:22:21.056369   14152 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 10:22:21.056388   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:21.556702   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:22.057126   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:22.556498   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:23.056398   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:23.556642   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same kapi.go:96 "waiting for pod" entry repeats every ~500ms, still Pending: [<nil>], from 10:22:24.056477 through 10:22:55.056927; 63 near-identical lines condensed ...]
	I0930 10:22:55.556170   14152 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:22:56.056244   14152 kapi.go:107] duration metric: took 1m16.503281291s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0930 10:22:56.058066   14152 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0930 10:22:56.059421   14152 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0930 10:22:56.060906   14152 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0930 10:22:56.062295   14152 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, storage-provisioner, storage-provisioner-rancher, yakd, metrics-server, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0930 10:22:56.063572   14152 addons.go:510] duration metric: took 1m24.337731848s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner storage-provisioner storage-provisioner-rancher yakd metrics-server inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0930 10:22:56.063619   14152 start.go:246] waiting for cluster config update ...
	I0930 10:22:56.063635   14152 start.go:255] writing updated cluster config ...
	I0930 10:22:56.063877   14152 exec_runner.go:51] Run: rm -f paused
	I0930 10:22:56.107264   14152 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 10:22:56.109227   14152 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
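	
	The gcp-auth notes above are the relevant how-to here: a pod opts out of credential mounting via the `gcp-auth-skip-secret` label, and pods created before the addon was ready only pick credentials up after a recreate or a `--refresh`. A minimal sketch of an opted-out pod (name and image are placeholders, not taken from this run):
	
	    kubectl --context minikube apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds              # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"  # gcp-auth webhook leaves this pod alone
	    spec:
	      containers:
	      - name: app
	        image: busybox                # placeholder image
	        command: ["sleep", "3600"]
	    EOF
	    # and, per the note above, to re-inject credentials into pre-existing pods:
	    out/minikube-linux-amd64 addons enable gcp-auth --refresh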
	
	
	==> Docker <==
	-- Logs begin at Mon 2024-08-19 17:40:18 UTC, end at Mon 2024-09-30 10:32:47 UTC. --
	Sep 30 10:23:51 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:23:51.152170675Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=c725380adcbf0519 traceID=48d6c4a189c6ff22302e4d9f6e51e976
	Sep 30 10:23:51 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:23:51.154559396Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=c725380adcbf0519 traceID=48d6c4a189c6ff22302e4d9f6e51e976
	Sep 30 10:23:54 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:23:54.100954293Z" level=info msg="Container failed to exit within 30s of signal 3 - using the force" container=446753e1dbd953f28ff5a38d76a0c59eb2361967a10e898873aa5da832b7fd83 spanID=77213cd02bf55884 traceID=c639da10f8cf5f0b2b4cc2cd5761778d
	Sep 30 10:23:54 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:23:54.120697356Z" level=info msg="ignoring event" container=446753e1dbd953f28ff5a38d76a0c59eb2361967a10e898873aa5da832b7fd83 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:23:54 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:23:54.246742173Z" level=info msg="ignoring event" container=226312f846e45b5af3e224d96d56822ab3610315c2e1a583e7ec2271322969dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:24:16 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:24:16.149232628Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5c1158a38c234ed4 traceID=7582d0044b1171e5bed87fe3b5e1089e
	Sep 30 10:24:16 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:24:16.151509823Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5c1158a38c234ed4 traceID=7582d0044b1171e5bed87fe3b5e1089e
	Sep 30 10:24:57 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:24:57.152950035Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=4deb59e3d585f351 traceID=004ea83b82c811b58b74856797c33229
	Sep 30 10:24:57 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:24:57.155044385Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=4deb59e3d585f351 traceID=004ea83b82c811b58b74856797c33229
	Sep 30 10:26:28 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:26:28.167760918Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=9c5265e26501244c traceID=7fe14e20482f9144417a6de48dd1603d
	Sep 30 10:26:28 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:26:28.170085159Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=9c5265e26501244c traceID=7fe14e20482f9144417a6de48dd1603d
	Sep 30 10:29:14 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:29:14.158227679Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=1fdb7dd66d5d39ec traceID=40288c81b041285da5047257a1908e1e
	Sep 30 10:29:14 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:29:14.160726753Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=1fdb7dd66d5d39ec traceID=40288c81b041285da5047257a1908e1e
	Sep 30 10:31:47 ubuntu-20-agent-2 cri-dockerd[15137]: time="2024-09-30T10:31:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/053317de53afc515d061cd0225fdeb9d78a0cc0484b18ca973a72ddb3d6100bb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 30 10:31:47 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:31:47.419726434Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=5db194501f01e803 traceID=aa4eee7aa649e59274183bad5b2875ad
	Sep 30 10:31:47 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:31:47.421972810Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=5db194501f01e803 traceID=aa4eee7aa649e59274183bad5b2875ad
	Sep 30 10:32:03 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:03.155350329Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=cf227ef6a0193812 traceID=d98514dd854b9e5f6abed4e865959477
	Sep 30 10:32:03 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:03.157451111Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=cf227ef6a0193812 traceID=d98514dd854b9e5f6abed4e865959477
	Sep 30 10:32:32 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:32.147733631Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=d0127899ce499d03 traceID=4565e20d393b06e158ee815dbef9ea4b
	Sep 30 10:32:32 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:32.149880727Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=d0127899ce499d03 traceID=4565e20d393b06e158ee815dbef9ea4b
	Sep 30 10:32:46 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:46.883672659Z" level=info msg="ignoring event" container=053317de53afc515d061cd0225fdeb9d78a0cc0484b18ca973a72ddb3d6100bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:32:47 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:47.143441131Z" level=info msg="ignoring event" container=ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:32:47 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:47.205040928Z" level=info msg="ignoring event" container=20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:32:47 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:47.278578428Z" level=info msg="ignoring event" container=e6aa8711c0f3b3400e09abe872b72f1e773279268095a5de6aee87c11c464933 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:32:47 ubuntu-20-agent-2 dockerd[14386]: time="2024-09-30T10:32:47.367901138Z" level=info msg="ignoring event" container=00bfe4005c12f489a53e983131546cdafc15f2e7857d73beb19e7a31277b2c77 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
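	
	The repeated "unauthorized: authentication failed" pulls above line up with the registry test's timeout: the busybox image for the test pod never arrived. Since gcr.io/k8s-minikube/busybox is public and an anonymous pull should succeed, a stale or wrong credential being offered for gcr.io is one plausible culprit. A quick, illustrative way to replay the pull on the node outside the kubelet (tag copied from the log):
	
	    # same manifest HEAD the kubelet kept attempting
	    docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    # see which credentials dockerd would present for gcr.io
	    cat ~/.docker/config.json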
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	77d3a63a4499d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   dde583950cee8       gcp-auth-89d5ffd79-ncvcd
	2715aeb1faecb       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   78fbc418d73c9       csi-hostpathplugin-6dwlc
	445379e1f78c2       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   78fbc418d73c9       csi-hostpathplugin-6dwlc
	fb526e3386c16       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   78fbc418d73c9       csi-hostpathplugin-6dwlc
	151aa322ca049       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   78fbc418d73c9       csi-hostpathplugin-6dwlc
	16030347ece15       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   78fbc418d73c9       csi-hostpathplugin-6dwlc
	1ff3076f030d2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   78fbc418d73c9       csi-hostpathplugin-6dwlc
	4b3a822c6afae       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   15f462964a9e7       csi-hostpath-attacher-0
	8e1cb05130fa6       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   3a95adbe7ba81       csi-hostpath-resizer-0
	a4b3036371375       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   3b693fa000d24       snapshot-controller-56fcc65765-9sn5g
	3d96127fdb7f6       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   e202dc2df4c12       snapshot-controller-56fcc65765-4tnvr
	2cfb8b31a2a3f       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        10 minutes ago      Running             metrics-server                           0                   59c4aa91b3d85       metrics-server-84c5f94fbc-k6tlb
	a8fde3e4664b0       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   69560888b64c5       yakd-dashboard-67d98fc6b-zp446
	eb6fea0098023       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            10 minutes ago      Running             gadget                                   0                   0a0ef867c54cb       gadget-gnw75
	275554e8ee343       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       11 minutes ago      Running             local-path-provisioner                   0                   a35558757f855       local-path-provisioner-86d989889c-gfrcs
	7a6d235f7418d       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               11 minutes ago      Running             cloud-spanner-emulator                   0                   98cb99fb801c1       cloud-spanner-emulator-5b584cc74-56ngp
	fc058dacc50a9       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   0449a34351598       nvidia-device-plugin-daemonset-6496t
	787ae08a32416       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   2e20d6e48bd87       storage-provisioner
	813e6bc8f907c       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   366d678fb0d89       coredns-7c65d6cfc9-l6kz2
	f353b443fe2db       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   21b50e65d5600       kube-proxy-6zcvv
	bfc730f14072e       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   97839c430679a       etcd-ubuntu-20-agent-2
	829177e649efe       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   256a6621c886c       kube-apiserver-ubuntu-20-agent-2
	ded343e5a89cc       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   b5b4a6390da5d       kube-scheduler-ubuntu-20-agent-2
	1279df1159b2f       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   fdafef1c1cd9f       kube-controller-manager-ubuntu-20-agent-2
	
	
	==> coredns [813e6bc8f907] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:60216 - 7481 "HINFO IN 3248328176289781825.6718732232653688194. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.036004084s
	[INFO] 10.244.0.23:51311 - 33272 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000251243s
	[INFO] 10.244.0.23:42563 - 41308 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000353657s
	[INFO] 10.244.0.23:49151 - 33869 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009722s
	[INFO] 10.244.0.23:51011 - 17223 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000104725s
	[INFO] 10.244.0.23:32768 - 60037 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000152797s
	[INFO] 10.244.0.23:52965 - 35283 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000146158s
	[INFO] 10.244.0.23:59149 - 34124 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004172305s
	[INFO] 10.244.0.23:34492 - 48188 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004230952s
	[INFO] 10.244.0.23:52366 - 50679 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003444693s
	[INFO] 10.244.0.23:49279 - 38702 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003611762s
	[INFO] 10.244.0.23:53116 - 12018 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003551489s
	[INFO] 10.244.0.23:59171 - 58768 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.008509604s
	[INFO] 10.244.0.23:40764 - 38481 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002620262s
	[INFO] 10.244.0.23:54666 - 36232 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002805743s
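	
	The NXDOMAIN-then-NOERROR ladder above is ordinary ndots:5 search-path expansion, not a resolver fault: every domain from the pod's resolv.conf search list (the cri-dockerd line in the Docker section shows the full list) is tried before the bare name finally resolves. To watch the same expansion from a running pod (`<pod>` is a placeholder):
	
	    kubectl --context minikube exec <pod> -- cat /etc/resolv.conf
	    kubectl --context minikube exec <pod> -- nslookup storage.googleapis.com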
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T10_21_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 10:21:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 10:32:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 10:28:34 +0000   Mon, 30 Sep 2024 10:21:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 10:28:34 +0000   Mon, 30 Sep 2024 10:21:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 10:28:34 +0000   Mon, 30 Sep 2024 10:21:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 10:28:34 +0000   Mon, 30 Sep 2024 10:21:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    ef9eed15-051c-4afe-8634-23d275b24342
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-5b584cc74-56ngp       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-gnw75                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-ncvcd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-l6kz2                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-6dwlc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-6zcvv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-k6tlb              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-6496t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-4tnvr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-9sn5g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-gfrcs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-zp446               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 96 1f df c1 20 21 08 06
	[  +0.012234] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 7f cf 3e b3 f1 08 06
	[  +2.637486] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a bc c5 bb 1d c6 08 06
	[  +1.474121] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 98 60 f4 2d 64 08 06
	[Sep30 10:22] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e a7 31 e9 7b bc 08 06
	[  +4.437769] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 6c ca d8 fe ac 08 06
	[  +0.269653] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 a2 1c 11 ee 45 08 06
	[  +0.149081] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 3a c0 7f 02 cb 08 06
	[  +1.283154] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e 5b ce 3f 31 1a 08 06
	[ +35.911599] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 90 c2 a2 91 4a 08 06
	[  +0.020153] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 52 05 90 ac 35 08 06
	[ +11.234903] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 fa 93 fb 23 2f 08 06
	[  +0.000451] IPv4: martian source 10.244.0.23 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 9a dc 54 7a c1 80 08 06
	
	
	==> etcd [bfc730f14072] <==
	{"level":"info","ts":"2024-09-30T10:21:22.840570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-30T10:21:22.840637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-30T10:21:22.840667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
	{"level":"info","ts":"2024-09-30T10:21:22.840684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
	{"level":"info","ts":"2024-09-30T10:21:22.840696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-30T10:21:22.840712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-30T10:21:22.840726Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-30T10:21:22.841556Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:22.842131Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:21:22.842135Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T10:21:22.842158Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:21:22.842359Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T10:21:22.842427Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T10:21:22.842458Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:22.842527Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:22.842549Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:22.843194Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:21:22.843227Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:21:22.843978Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-30T10:21:22.843997Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-30T10:21:34.862375Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.293496ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-attacher\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-30T10:21:34.862463Z","caller":"traceutil/trace.go:171","msg":"trace[867252267] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:0; response_revision:724; }","duration":"106.395178ms","start":"2024-09-30T10:21:34.756056Z","end":"2024-09-30T10:21:34.862451Z","steps":["trace[867252267] 'agreement among raft nodes before linearized reading'  (duration: 106.266644ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-30T10:31:22.861908Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1694}
	{"level":"info","ts":"2024-09-30T10:31:22.885267Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1694,"took":"22.883896ms","hash":3580243840,"current-db-size-bytes":8245248,"current-db-size":"8.2 MB","current-db-size-in-use-bytes":4165632,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2024-09-30T10:31:22.885310Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3580243840,"revision":1694,"compact-revision":-1}
	
	
	==> gcp-auth [77d3a63a4499] <==
	2024/09/30 10:22:55 GCP Auth Webhook started!
	2024/09/30 10:23:11 Ready to marshal response ...
	2024/09/30 10:23:11 Ready to write response ...
	2024/09/30 10:23:12 Ready to marshal response ...
	2024/09/30 10:23:12 Ready to write response ...
	2024/09/30 10:23:34 Ready to marshal response ...
	2024/09/30 10:23:34 Ready to write response ...
	2024/09/30 10:23:34 Ready to marshal response ...
	2024/09/30 10:23:34 Ready to write response ...
	2024/09/30 10:23:34 Ready to marshal response ...
	2024/09/30 10:23:34 Ready to write response ...
	2024/09/30 10:31:46 Ready to marshal response ...
	2024/09/30 10:31:46 Ready to write response ...
	
	
	==> kernel <==
	 10:32:47 up 15 min,  0 users,  load average: 0.32, 0.75, 0.52
	Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [829177e649ef] <==
	W0930 10:22:13.859640       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.131.47:443: connect: connection refused
	W0930 10:22:20.564751       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.177.15:443: connect: connection refused
	E0930 10:22:20.564784       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.177.15:443: connect: connection refused" logger="UnhandledError"
	W0930 10:22:42.572196       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.177.15:443: connect: connection refused
	E0930 10:22:42.572229       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.177.15:443: connect: connection refused" logger="UnhandledError"
	W0930 10:22:42.581501       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.177.15:443: connect: connection refused
	E0930 10:22:42.581533       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.177.15:443: connect: connection refused" logger="UnhandledError"
	I0930 10:23:11.372229       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0930 10:23:11.387369       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0930 10:23:23.785822       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0930 10:23:23.806841       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0930 10:23:23.924090       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0930 10:23:23.925899       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0930 10:23:23.925952       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0930 10:23:23.991148       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0930 10:23:24.073457       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0930 10:23:24.100494       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0930 10:23:24.182098       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0930 10:23:24.955804       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0930 10:23:24.992036       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0930 10:23:25.106367       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0930 10:23:25.106986       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0930 10:23:25.182336       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0930 10:23:25.182344       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0930 10:23:25.356355       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [1279df1159b2] <==
	W0930 10:31:29.795582       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:31:29.795623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	[... the identical PartialObjectMetadata reflector W/E pair (metadatainformer/informer.go:138) recurs at 10:31:30, 10:31:38, 10:31:45, 10:31:49, 10:31:56, 10:32:02, 10:32:24, 10:32:27, 10:32:30 and 10:32:40; 20 lines condensed ...]
	W0930 10:32:41.395056       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:32:41.395100       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:32:47.107599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="11.091µs"
	
	
	==> kube-proxy [f353b443fe2d] <==
	I0930 10:21:32.450974       1 server_linux.go:66] "Using iptables proxy"
	I0930 10:21:32.765348       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0930 10:21:32.765432       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 10:21:32.875103       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0930 10:21:32.875165       1 server_linux.go:169] "Using iptables Proxier"
	I0930 10:21:32.878099       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 10:21:32.878448       1 server.go:483] "Version info" version="v1.31.1"
	I0930 10:21:32.878477       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:21:32.880594       1 config.go:199] "Starting service config controller"
	I0930 10:21:32.880621       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 10:21:32.880661       1 config.go:105] "Starting endpoint slice config controller"
	I0930 10:21:32.880667       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 10:21:32.881115       1 config.go:328] "Starting node config controller"
	I0930 10:21:32.881122       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 10:21:32.981720       1 shared_informer.go:320] Caches are synced for node config
	I0930 10:21:32.981766       1 shared_informer.go:320] Caches are synced for service config
	I0930 10:21:32.981798       1 shared_informer.go:320] Caches are synced for endpoint slice config
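	
	kube-proxy's startup warning above is a configuration nudge, not an error: with nodePortAddresses unset, NodePort services accept traffic on every local IP. Following the suggestion would mean a stanza roughly like the following in the kube-system/kube-proxy ConfigMap (a sketch only; minikube generates and owns this config, and the "primary" shorthand assumes a recent KubeProxyConfiguration):
	
	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    nodePortAddresses: ["primary"]   # accept NodePort connections only on the node's primary IP(s)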
	
	
	==> kube-scheduler [ded343e5a89c] <==
	E0930 10:21:23.719727       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.719642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 10:21:23.719829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0930 10:21:23.719850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.719677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0930 10:21:23.719888       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.719653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0930 10:21:23.719918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.719741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 10:21:23.719954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.719820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 10:21:23.719982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:24.588445       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0930 10:21:24.588486       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0930 10:21:24.629142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 10:21:24.629183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:24.682723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 10:21:24.682765       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:24.686030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 10:21:24.686070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:24.698291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0930 10:21:24.698334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:24.758944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 10:21:24.758996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0930 10:21:26.717350       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Mon 2024-08-19 17:40:18 UTC, end at Mon 2024-09-30 10:32:48 UTC. --
	Sep 30 10:32:37 ubuntu-20-agent-2 kubelet[16031]: E0930 10:32:37.009566   16031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="b0e36b4a-71a7-4915-8c32-b4be6cd9aa5a"
	Sep 30 10:32:43 ubuntu-20-agent-2 kubelet[16031]: E0930 10:32:43.009871   16031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="476ee8f0-d7e7-4a87-9b58-c2082f236775"
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.043730   16031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/476ee8f0-d7e7-4a87-9b58-c2082f236775-gcp-creds\") pod \"476ee8f0-d7e7-4a87-9b58-c2082f236775\" (UID: \"476ee8f0-d7e7-4a87-9b58-c2082f236775\") "
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.043802   16031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd6lf\" (UniqueName: \"kubernetes.io/projected/476ee8f0-d7e7-4a87-9b58-c2082f236775-kube-api-access-fd6lf\") pod \"476ee8f0-d7e7-4a87-9b58-c2082f236775\" (UID: \"476ee8f0-d7e7-4a87-9b58-c2082f236775\") "
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.043812   16031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/476ee8f0-d7e7-4a87-9b58-c2082f236775-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "476ee8f0-d7e7-4a87-9b58-c2082f236775" (UID: "476ee8f0-d7e7-4a87-9b58-c2082f236775"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.043906   16031 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/476ee8f0-d7e7-4a87-9b58-c2082f236775-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.045538   16031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/476ee8f0-d7e7-4a87-9b58-c2082f236775-kube-api-access-fd6lf" (OuterVolumeSpecName: "kube-api-access-fd6lf") pod "476ee8f0-d7e7-4a87-9b58-c2082f236775" (UID: "476ee8f0-d7e7-4a87-9b58-c2082f236775"). InnerVolumeSpecName "kube-api-access-fd6lf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.144495   16031 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fd6lf\" (UniqueName: \"kubernetes.io/projected/476ee8f0-d7e7-4a87-9b58-c2082f236775-kube-api-access-fd6lf\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.445994   16031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpqn5\" (UniqueName: \"kubernetes.io/projected/f279ea6c-0d65-4d94-9dc1-43ba6d130381-kube-api-access-vpqn5\") pod \"f279ea6c-0d65-4d94-9dc1-43ba6d130381\" (UID: \"f279ea6c-0d65-4d94-9dc1-43ba6d130381\") "
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.448129   16031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f279ea6c-0d65-4d94-9dc1-43ba6d130381-kube-api-access-vpqn5" (OuterVolumeSpecName: "kube-api-access-vpqn5") pod "f279ea6c-0d65-4d94-9dc1-43ba6d130381" (UID: "f279ea6c-0d65-4d94-9dc1-43ba6d130381"). InnerVolumeSpecName "kube-api-access-vpqn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.546849   16031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjqwt\" (UniqueName: \"kubernetes.io/projected/3bd51464-305d-4990-aed6-cb08ea16c1b9-kube-api-access-qjqwt\") pod \"3bd51464-305d-4990-aed6-cb08ea16c1b9\" (UID: \"3bd51464-305d-4990-aed6-cb08ea16c1b9\") "
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.547080   16031 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vpqn5\" (UniqueName: \"kubernetes.io/projected/f279ea6c-0d65-4d94-9dc1-43ba6d130381-kube-api-access-vpqn5\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.548881   16031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd51464-305d-4990-aed6-cb08ea16c1b9-kube-api-access-qjqwt" (OuterVolumeSpecName: "kube-api-access-qjqwt") pod "3bd51464-305d-4990-aed6-cb08ea16c1b9" (UID: "3bd51464-305d-4990-aed6-cb08ea16c1b9"). InnerVolumeSpecName "kube-api-access-qjqwt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.647250   16031 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qjqwt\" (UniqueName: \"kubernetes.io/projected/3bd51464-305d-4990-aed6-cb08ea16c1b9-kube-api-access-qjqwt\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.653539   16031 scope.go:117] "RemoveContainer" containerID="ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305"
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.670614   16031 scope.go:117] "RemoveContainer" containerID="ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305"
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: E0930 10:32:47.671384   16031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305" containerID="ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305"
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.671424   16031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305"} err="failed to get container status \"ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305\": rpc error: code = Unknown desc = Error response from daemon: No such container: ed026841d206ee4d6c271923ab6a6f79bc3211a1b50bb5ef7e4ec11001f82305"
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.671452   16031 scope.go:117] "RemoveContainer" containerID="20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500"
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.687273   16031 scope.go:117] "RemoveContainer" containerID="20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500"
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: E0930 10:32:47.688296   16031 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500" containerID="20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500"
	Sep 30 10:32:47 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:47.688513   16031 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500"} err="failed to get container status \"20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500\": rpc error: code = Unknown desc = Error response from daemon: No such container: 20eade86f3b833474d104bb142c2820360a92948d574d651f5937253180d4500"
	Sep 30 10:32:48 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:48.017988   16031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd51464-305d-4990-aed6-cb08ea16c1b9" path="/var/lib/kubelet/pods/3bd51464-305d-4990-aed6-cb08ea16c1b9/volumes"
	Sep 30 10:32:48 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:48.018312   16031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="476ee8f0-d7e7-4a87-9b58-c2082f236775" path="/var/lib/kubelet/pods/476ee8f0-d7e7-4a87-9b58-c2082f236775/volumes"
	Sep 30 10:32:48 ubuntu-20-agent-2 kubelet[16031]: I0930 10:32:48.018500   16031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f279ea6c-0d65-4d94-9dc1-43ba6d130381" path="/var/lib/kubelet/pods/f279ea6c-0d65-4d94-9dc1-43ba6d130381/volumes"
	
	
	==> storage-provisioner [787ae08a3241] <==
	I0930 10:21:34.216352       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 10:21:34.233408       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 10:21:34.233460       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 10:21:34.246641       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 10:21:34.246864       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_51215cdf-210d-471b-a636-de14d21ab3dc!
	I0930 10:21:34.248241       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12f047f2-6055-43ed-8ded-62a38c2a34fb", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_51215cdf-210d-471b-a636-de14d21ab3dc became leader
	I0930 10:21:34.347468       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_51215cdf-210d-471b-a636-de14d21ab3dc!
	

-- /stdout --
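Note on the kube-scheduler block above: the cluster-scope "forbidden" list/watch errors at 10:21:23-10:21:24 are the usual startup race in which the scheduler's informers come up before its RBAC bindings have propagated; they stop once the caches sync at 10:21:26, so they are unrelated to the registry failure. A quick impersonation check to confirm the permissions settled (a hypothetical follow-up, not part of this run):

	kubectl --context minikube auth can-i list nodes --as=system:kube-scheduler
	kubectl --context minikube auth can-i watch csistoragecapacities.storage.k8s.io --as=system:kube-scheduler

Both commands should print "yes" once the bindings are in place.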
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Mon, 30 Sep 2024 10:23:34 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kjmxj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kjmxj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m51s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m51s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m51s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m24s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m8s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
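The describe output pins the busybox failure to the pull itself: gcr.io rejects gcr.io/k8s-minikube/busybox:1.28.4-glibc with "unauthorized: authentication failed", so the pod never leaves ImagePullBackOff, and the kubelet log above shows the registry-test pod stuck in the same back-off on gcr.io/k8s-minikube/busybox. A minimal reproduction outside Kubernetes (a hypothetical spot-check on the test host, which uses the Docker runtime under the none driver):

	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

If this fails with the same unauthorized error, the fault is on the registry or credentials side of the runner rather than in the cluster under test.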
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.81s)

Test pass (110/167)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 4.43
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 0.9
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.54
22 TestOffline 42.98
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 102.8
29 TestAddons/serial/Volcano 38.21
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.43
36 TestAddons/parallel/MetricsServer 6.35
38 TestAddons/parallel/CSI 46.08
39 TestAddons/parallel/Headlamp 14.88
40 TestAddons/parallel/CloudSpanner 5.25
42 TestAddons/parallel/NvidiaDevicePlugin 6.25
43 TestAddons/parallel/Yakd 11.49
44 TestAddons/StoppedEnableDisable 10.73
46 TestCertExpiration 229.39
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 30.22
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 30.54
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.07
64 TestFunctional/serial/MinikubeKubectlCmd 0.1
65 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
66 TestFunctional/serial/ExtraConfig 33.36
67 TestFunctional/serial/ComponentHealth 0.07
68 TestFunctional/serial/LogsCmd 0.81
69 TestFunctional/serial/LogsFileCmd 0.87
70 TestFunctional/serial/InvalidService 4.49
72 TestFunctional/parallel/ConfigCmd 0.27
73 TestFunctional/parallel/DashboardCmd 7.78
74 TestFunctional/parallel/DryRun 0.16
75 TestFunctional/parallel/InternationalLanguage 0.08
76 TestFunctional/parallel/StatusCmd 0.43
79 TestFunctional/parallel/ProfileCmd/profile_not_create 0.21
80 TestFunctional/parallel/ProfileCmd/profile_list 0.2
81 TestFunctional/parallel/ProfileCmd/profile_json_output 0.2
83 TestFunctional/parallel/ServiceCmd/DeployApp 9.15
84 TestFunctional/parallel/ServiceCmd/List 0.34
85 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
86 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
87 TestFunctional/parallel/ServiceCmd/Format 0.15
88 TestFunctional/parallel/ServiceCmd/URL 0.15
89 TestFunctional/parallel/ServiceCmdConnect 8.3
90 TestFunctional/parallel/AddonsCmd 0.11
91 TestFunctional/parallel/PersistentVolumeClaim 20.64
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.26
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
97 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.18
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
103 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
106 TestFunctional/parallel/MySQL 20.45
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 13.53
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 14
115 TestFunctional/parallel/NodeLabels 0.06
119 TestFunctional/parallel/Version/short 0.04
120 TestFunctional/parallel/Version/components 0.38
121 TestFunctional/parallel/License 0.24
122 TestFunctional/delete_echo-server_images 0.03
123 TestFunctional/delete_my-image_image 0.01
124 TestFunctional/delete_minikube_cached_images 0.01
129 TestImageBuild/serial/Setup 13.85
130 TestImageBuild/serial/NormalBuild 1.84
131 TestImageBuild/serial/BuildWithBuildArg 0.87
132 TestImageBuild/serial/BuildWithDockerIgnore 0.67
133 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.65
137 TestJSONOutput/start/Command 28.8
138 TestJSONOutput/start/Audit 0
140 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
141 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
143 TestJSONOutput/pause/Command 0.51
144 TestJSONOutput/pause/Audit 0
146 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/unpause/Command 0.42
150 TestJSONOutput/unpause/Audit 0
152 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/stop/Command 5.31
156 TestJSONOutput/stop/Audit 0
158 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
160 TestErrorJSONOutput 0.19
165 TestMainNoArgs 0.04
166 TestMinikubeProfile 33.98
174 TestPause/serial/Start 24.64
175 TestPause/serial/SecondStartNoReconfiguration 33.2
176 TestPause/serial/Pause 0.49
177 TestPause/serial/VerifyStatus 0.13
178 TestPause/serial/Unpause 0.4
179 TestPause/serial/PauseAgain 0.55
180 TestPause/serial/DeletePaused 1.71
181 TestPause/serial/VerifyDeletedResources 0.06
195 TestRunningBinaryUpgrade 69.75
197 TestStoppedBinaryUpgrade/Setup 0.36
198 TestStoppedBinaryUpgrade/Upgrade 51.04
199 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
200 TestKubernetesUpgrade 308.08

TestDownloadOnly/v1.20.0/json-events (4.43s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (4.43364973s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (4.43s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (56.442557ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:20:23
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:20:23.582733   10503 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:20:23.582841   10503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:23.582848   10503 out.go:358] Setting ErrFile to fd 2...
	I0930 10:20:23.582852   10503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:23.583004   10503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3681/.minikube/bin
	W0930 10:20:23.583112   10503 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19734-3681/.minikube/config/config.json: open /home/jenkins/minikube-integration/19734-3681/.minikube/config/config.json: no such file or directory
	I0930 10:20:23.583662   10503 out.go:352] Setting JSON to true
	I0930 10:20:23.584484   10503 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":172,"bootTime":1727691452,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:20:23.584574   10503 start.go:139] virtualization: kvm guest
	I0930 10:20:23.586900   10503 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0930 10:20:23.587002   10503 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19734-3681/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 10:20:23.587010   10503 notify.go:220] Checking for updates...
	I0930 10:20:23.588383   10503 out.go:169] MINIKUBE_LOCATION=19734
	I0930 10:20:23.589694   10503 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:20:23.590892   10503 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19734-3681/kubeconfig
	I0930 10:20:23.592111   10503 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3681/.minikube
	I0930 10:20:23.593372   10503 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (0.9s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (0.90s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (57.696014ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:20:28
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:20:28.302813   10655 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:20:28.302913   10655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:28.302923   10655 out.go:358] Setting ErrFile to fd 2...
	I0930 10:20:28.302928   10655 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:28.303092   10655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3681/.minikube/bin
	I0930 10:20:28.303637   10655 out.go:352] Setting JSON to true
	I0930 10:20:28.304546   10655 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":176,"bootTime":1727691452,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:20:28.304640   10655 start.go:139] virtualization: kvm guest
	I0930 10:20:28.306705   10655 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0930 10:20:28.306803   10655 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19734-3681/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 10:20:28.306851   10655 notify.go:220] Checking for updates...
	I0930 10:20:28.308288   10655 out.go:169] MINIKUBE_LOCATION=19734
	I0930 10:20:28.309616   10655 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:20:28.310907   10655 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19734-3681/kubeconfig
	I0930 10:20:28.312274   10655 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3681/.minikube
	I0930 10:20:28.313583   10655 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
I0930 10:20:29.694568   10491 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:43761 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.54s)

TestOffline (42.98s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (41.390338857s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.593804185s)
--- PASS: TestOffline (42.98s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (46.654196ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (46.78947ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (102.8s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (1m42.800931087s)
--- PASS: TestAddons/Setup (102.80s)

TestAddons/serial/Volcano (38.21s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 9.498276ms
addons_test.go:843: volcano-admission stabilized in 9.555648ms
addons_test.go:851: volcano-controller stabilized in 9.605648ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-g8gld" [a11e33b3-22c8-47a7-a079-1d0f700b814d] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003727962s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-fc7hs" [9b505655-f112-4fb9-a427-5ba93966c156] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00372225s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-m424j" [a302861a-7be2-4307-9374-a0f3d536ea78] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003557873s
addons_test.go:870: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [7c2ef331-209f-48f7-bd87-4b511838e8f2] Pending
helpers_test.go:344: "test-job-nginx-0" [7c2ef331-209f-48f7-bd87-4b511838e8f2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [7c2ef331-209f-48f7-bd87-4b511838e8f2] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.00401917s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.883176473s)
--- PASS: TestAddons/serial/Volcano (38.21s)
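For orientation, the vcjob created from testdata/vcjob.yaml is a Volcano Job whose single nginx task yields the pod test-job-nginx-0 waited on above (Volcano names pods <job>-<task>-<index>). A minimal manifest of that shape, offered as a hypothetical sketch consistent with those names rather than a copy of the testdata file, would be applied with kubectl --context minikube create -f <file>:

	apiVersion: batch.volcano.sh/v1alpha1
	kind: Job
	metadata:
	  name: test-job
	  namespace: my-volcano
	spec:
	  schedulerName: volcano
	  minAvailable: 1
	  tasks:
	    - name: nginx
	      replicas: 1
	      template:
	        spec:
	          restartPolicy: Never
	          containers:
	            - name: nginx
	              image: nginx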

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.43s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gnw75" [c7d79da3-cff7-4698-a9ed-fa9f62568290] Running
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004058848s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.423408534s)
--- PASS: TestAddons/parallel/InspektorGadget (10.43s)

TestAddons/parallel/MetricsServer (6.35s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.893498ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-k6tlb" [9dab8e12-be75-43d4-b706-334dbdb7b9c3] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003392494s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.35s)

TestAddons/parallel/CSI (46.08s)

=== RUN   TestAddons/parallel/CSI
I0930 10:33:05.213188   10491 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0930 10:33:05.217292   10491 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0930 10:33:05.217316   10491 kapi.go:107] duration metric: took 4.137089ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.146915ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1d42ee69-38a6-4bb8-91ad-3d573398d243] Pending
helpers_test.go:344: "task-pv-pod" [1d42ee69-38a6-4bb8-91ad-3d573398d243] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1d42ee69-38a6-4bb8-91ad-3d573398d243] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003670198s
addons_test.go:528: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0bc26116-8cd0-4b5e-94c6-aeb0325ecf65] Pending
helpers_test.go:344: "task-pv-pod-restore" [0bc26116-8cd0-4b5e-94c6-aeb0325ecf65] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0bc26116-8cd0-4b5e-94c6-aeb0325ecf65] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003713829s
addons_test.go:570: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.261037691s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.08s)
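The sequence above is the standard CSI snapshot round-trip: bind a PVC (hpvc), attach it to task-pv-pod, snapshot it as new-snapshot-demo, then restore into hpvc-restore whose dataSource references the snapshot; the repeated jsonpath={.status.phase} polls simply wait for each claim to report Bound. The restore claim, sketched as a hypothetical manifest (storage class name assumed from the csi-hostpath-driver addon defaults, not copied from testdata/csi-hostpath-driver/pvc-restore.yaml):

	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: hpvc-restore
	spec:
	  storageClassName: csi-hostpath-sc
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 1Gi
	  dataSource:
	    name: new-snapshot-demo
	    kind: VolumeSnapshot
	    apiGroup: snapshot.storage.k8s.io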

TestAddons/parallel/Headlamp (14.88s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-74d9n" [80e714f7-4eff-4c0d-8a3a-db5ce5db04f5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-74d9n" [80e714f7-4eff-4c0d-8a3a-db5ce5db04f5] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.00386324s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.381200741s)
--- PASS: TestAddons/parallel/Headlamp (14.88s)

TestAddons/parallel/CloudSpanner (5.25s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-56ngp" [a0c4616f-c83b-4012-8818-c0418293fb15] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003569512s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.25s)

TestAddons/parallel/NvidiaDevicePlugin (6.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6496t" [5671e02b-bae3-433f-98c5-56b427f3e666] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003768836s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.25s)

TestAddons/parallel/Yakd (11.49s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-zp446" [2fbb363d-ee8a-4954-92e1-30525995f043] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003885598s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.482153188s)
--- PASS: TestAddons/parallel/Yakd (11.49s)

TestAddons/StoppedEnableDisable (10.73s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.442555705s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.73s)

TestCertExpiration (229.39s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.575027084s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (33.101401615s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.708708523s)
--- PASS: TestCertExpiration (229.39s)
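
Note: the 229s wall time is dominated by the deliberate 3-minute wait for the short-lived certificates to lapse (14.6s + 180s + 33.1s + 1.7s ≈ 229.4s). To reproduce the scenario outside CI, roughly the following sequence mirrors what the test drives (flags taken from the commands above; the none driver must run as root on the host):

	# start with client certs that expire in 3 minutes, let them lapse,
	# then restart with a long expiry so minikube regenerates the certs
	minikube start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
	sleep 180
	minikube start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
	minikube delete -p minikube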

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19734-3681/.minikube/files/etc/test/nested/copy/10491/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (30.22s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (30.223635376s)
--- PASS: TestFunctional/serial/StartWithProxy (30.22s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (30.54s)

=== RUN   TestFunctional/serial/SoftStart
I0930 10:39:00.635894   10491 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (30.539572939s)
functional_test.go:663: soft start took 30.540523983s for "minikube" cluster.
I0930 10:39:31.175902   10491 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (30.54s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (33.36s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.361438036s)
functional_test.go:761: restart took 33.361550815s for "minikube" cluster.
I0930 10:40:04.848747   10491 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (33.36s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
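
The health check above asserts phase Running and a Ready condition for each control-plane pod. An equivalent ad-hoc query (the jsonpath is illustrative, not the test's own code):

	kubectl --context minikube get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.phase}{"\n"}{end}'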

TestFunctional/serial/LogsCmd (0.81s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.81s)

TestFunctional/serial/LogsFileCmd (0.87s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd2937312023/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.87s)

TestFunctional/serial/InvalidService (4.49s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (164.859503ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:30634 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.148413684s)
--- PASS: TestFunctional/serial/InvalidService (4.49s)
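
The contents of testdata/invalidsvc.yaml are not reproduced in this log; a hypothetical stand-in that triggers the same SVC_UNREACHABLE exit (status 115) is a NodePort service whose selector matches no running pod:

	# illustrative manifest only -- the real testdata file may differ
	cat <<'EOF' | kubectl --context minikube apply -f -
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod
	  ports:
	  - port: 80
	EOF
	out/minikube-linux-amd64 service invalid-svc -p minikube   # fails: no backing pod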

TestFunctional/parallel/ConfigCmd (0.27s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (42.661088ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (42.565183ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.27s)
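
As the exits above show, `config get` on an unset key returns status 14 while set/unset return 0:

	minikube config unset cpus
	minikube config get cpus; echo "exit $?"   # key not found: exit 14
	minikube config set cpus 2
	minikube config get cpus                   # prints 2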

TestFunctional/parallel/DashboardCmd (7.78s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/30 10:40:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 51398: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.78s)

TestFunctional/parallel/DryRun (0.16s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (80.285109ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3681/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3681/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0930 10:40:19.171529   51763 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:40:19.171661   51763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:40:19.171673   51763 out.go:358] Setting ErrFile to fd 2...
	I0930 10:40:19.171680   51763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:40:19.171871   51763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3681/.minikube/bin
	I0930 10:40:19.172409   51763 out.go:352] Setting JSON to false
	I0930 10:40:19.173406   51763 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1367,"bootTime":1727691452,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:40:19.173502   51763 start.go:139] virtualization: kvm guest
	I0930 10:40:19.175720   51763 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0930 10:40:19.177033   51763 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19734-3681/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 10:40:19.177069   51763 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:40:19.177068   51763 notify.go:220] Checking for updates...
	I0930 10:40:19.179916   51763 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:40:19.181169   51763 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3681/kubeconfig
	I0930 10:40:19.182456   51763 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3681/.minikube
	I0930 10:40:19.183687   51763 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 10:40:19.184886   51763 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:40:19.186475   51763 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:40:19.186810   51763 exec_runner.go:51] Run: systemctl --version
	I0930 10:40:19.189593   51763 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:40:19.202409   51763 out.go:177] * Using the none driver based on existing profile
	I0930 10:40:19.203538   51763 start.go:297] selected driver: none
	I0930 10:40:19.203553   51763 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:40:19.203685   51763 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:40:19.203717   51763 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0930 10:40:19.204043   51763 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0930 10:40:19.206014   51763 out.go:201] 
	W0930 10:40:19.207149   51763 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0930 10:40:19.208306   51763 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.16s)
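
--dry-run validates the flag set against the existing profile without starting anything, so a request below the 1800MB usable minimum fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) while a valid flag set exits 0:

	minikube start -p minikube --dry-run --memory 250MB --driver=none; echo "exit $?"   # exit 23
	minikube start -p minikube --dry-run --driver=none                                  # exit 0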

TestFunctional/parallel/InternationalLanguage (0.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (79.948767ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3681/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3681/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0930 10:40:19.329260   51793 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:40:19.329371   51793 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:40:19.329378   51793 out.go:358] Setting ErrFile to fd 2...
	I0930 10:40:19.329383   51793 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:40:19.329680   51793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-3681/.minikube/bin
	I0930 10:40:19.330226   51793 out.go:352] Setting JSON to false
	I0930 10:40:19.331192   51793 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1367,"bootTime":1727691452,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0930 10:40:19.331332   51793 start.go:139] virtualization: kvm guest
	I0930 10:40:19.333230   51793 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0930 10:40:19.334501   51793 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19734-3681/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 10:40:19.334518   51793 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:40:19.334548   51793 notify.go:220] Checking for updates...
	I0930 10:40:19.336796   51793 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:40:19.337904   51793 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-3681/kubeconfig
	I0930 10:40:19.339164   51793 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3681/.minikube
	I0930 10:40:19.340412   51793 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0930 10:40:19.341732   51793 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:40:19.343301   51793 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:40:19.343645   51793 exec_runner.go:51] Run: systemctl --version
	I0930 10:40:19.346598   51793 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:40:19.357712   51793 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0930 10:40:19.358945   51793 start.go:297] selected driver: none
	I0930 10:40:19.358964   51793 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:40:19.359074   51793 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:40:19.359099   51793 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0930 10:40:19.359426   51793 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0930 10:40:19.361683   51793 out.go:201] 
	W0930 10:40:19.362880   51793 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0930 10:40:19.364040   51793 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

TestFunctional/parallel/StatusCmd (0.43s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.43s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

TestFunctional/parallel/ProfileCmd/profile_list (0.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "153.234546ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "44.546282ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.20s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "157.053745ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.749393ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.20s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-bf56z" [1e0f2f1b-9705-4513-8ec9-5bcb2431f8d9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-bf56z" [1e0f2f1b-9705-4513-8ec9-5bcb2431f8d9] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003783812s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.15s)
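
The deploy step is plain kubectl: create a deployment from the echoserver image, expose it as a NodePort, and wait for the pod to go Ready (the `kubectl wait` line is an illustrative equivalent of the test's pod polling):

	kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
	kubectl --context minikube wait --for=condition=ready pod -l app=hello-node --timeout=10m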

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "331.429541ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:30392
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.15s)

TestFunctional/parallel/ServiceCmd/URL (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:30392
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.15s)

TestFunctional/parallel/ServiceCmdConnect (8.3s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-tt5lb" [7896caed-8d2b-4b9f-b405-924848298ea2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-tt5lb" [7896caed-8d2b-4b9f-b405-924848298ea2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.002937443s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:30971
functional_test.go:1675: http://10.138.0.48:30971: success! body:

Hostname: hello-node-connect-67bdd5bbb4-tt5lb

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:30971
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.30s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (20.64s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7337ea65-6a87-4794-878a-ae688becf360] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.002791381s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [98dfd14f-4cb6-4e59-add3-b8ae504b1ca2] Pending
helpers_test.go:344: "sp-pod" [98dfd14f-4cb6-4e59-add3-b8ae504b1ca2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [98dfd14f-4cb6-4e59-add3-b8ae504b1ca2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003505331s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5c49cf52-eba0-4ee8-a017-320b72c1943a] Pending
helpers_test.go:344: "sp-pod" [5c49cf52-eba0-4ee8-a017-320b72c1943a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5c49cf52-eba0-4ee8-a017-320b72c1943a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003483014s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (20.64s)
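
The test's flow: claim storage, mount it from sp-pod, write /tmp/mount/foo, delete and recreate the pod, then verify the file survived. The pvc.yaml applied above is not shown in the log; a hypothetical minimal claim against the default storage class would look like:

	# illustrative manifest only -- the real testdata file may differ
	cat <<'EOF' | kubectl --context minikube apply -f -
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF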

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 53524: operation not permitted
helpers_test.go:508: unable to kill pid 53477: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d5cb16b6-f238-4810-a640-1a138629d946] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d5cb16b6-f238-4810-a640-1a138629d946] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.00420767s
I0930 10:41:09.192916   10491 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
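
`minikube tunnel` stays in the foreground and patches LoadBalancer services with a routable ingress IP; with the tunnel from the earlier step running, the IP is read back exactly as the test does:

	kubectl --context minikube get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'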

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.96.88 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MySQL (20.45s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-dddms" [2ff04f3a-7896-4b95-a0ca-faab1429321f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-dddms" [2ff04f3a-7896-4b95-a0ca-faab1429321f] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003979033s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-dddms -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-dddms -- mysql -ppassword -e "show databases;": exit status 1 (214.77301ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0930 10:41:26.777638   10491 retry.go:31] will retry after 948.342326ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-dddms -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-dddms -- mysql -ppassword -e "show databases;": exit status 1 (106.738393ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0930 10:41:27.833658   10491 retry.go:31] will retry after 1.909451181s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-dddms -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.45s)
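
The two failed attempts are expected: the pod reports Running before mysqld finishes initializing, so the test retries with backoff until the query succeeds. A hand-rolled equivalent of that loop (deployment name from the log):

	until kubectl --context minikube exec deploy/mysql -- \
	    mysql -ppassword -e "show databases;" 2>/dev/null; do
	  sleep 2   # wait for mysqld to accept connections
	done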

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.53s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.532981351s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (13.53s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.999810209s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (14.00s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.38s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.38s)

TestFunctional/parallel/License (0.24s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.24s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestImageBuild/serial/Setup (13.85s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.848427356s)
--- PASS: TestImageBuild/serial/Setup (13.85s)

TestImageBuild/serial/NormalBuild (1.84s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.838978178s)
--- PASS: TestImageBuild/serial/NormalBuild (1.84s)

TestImageBuild/serial/BuildWithBuildArg (0.87s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.87s)
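
testdata/image-build/test-arg is not shown here; a hypothetical Dockerfile that consumes the ENV_A value passed through --build-opt=build-arg would be:

	# illustrative only -- the real testdata Dockerfile may differ
	cat > Dockerfile <<'EOF'
	FROM busybox
	ARG ENV_A
	RUN echo "built with ENV_A=${ENV_A}"
	EOF
	minikube image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache . -p minikube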

TestImageBuild/serial/BuildWithDockerIgnore (0.67s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.67s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.65s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.65s)

TestJSONOutput/start/Command (28.8s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (28.801970489s)
--- PASS: TestJSONOutput/start/Command (28.80s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
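
The two parallel subtests above assert structure, not timing: with --output=json, minikube emits one CloudEvents-style JSON object per line, and every io.k8s.sigs.minikube.step event must carry a currentstep value that is distinct from all earlier ones and increasing. A minimal sketch of that invariant check, assuming only the event shape visible in the TestErrorJSONOutput stdout later in this report (the file and identifier names here are illustrative, not minikube's own):

	// step_check.go: pipe the output of `minikube start --output=json`
	// into stdin; exits non-zero if a step number repeats or goes backwards.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strconv"
	)

	type event struct {
		Type string `json:"type"`
		Data struct {
			CurrentStep string `json:"currentstep"`
		} `json:"data"`
	}

	func main() {
		last := -1
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
				continue // ignore non-JSON noise and non-step events
			}
			step, err := strconv.Atoi(ev.Data.CurrentStep)
			if err != nil {
				continue
			}
			if step <= last { // strictly increasing implies distinct
				fmt.Fprintf(os.Stderr, "step %d after %d: repeated or out of order\n", step, last)
				os.Exit(1)
			}
			last = step
		}
	}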

TestJSONOutput/pause/Command (0.51s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.51s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.42s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.42s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.31s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.310771742s)
--- PASS: TestJSONOutput/stop/Command (5.31s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.46487ms)

-- stdout --
	{"specversion":"1.0","id":"6aa83bb8-fb67-4a23-8bed-b77cf57e1fd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d171009f-7bb9-4ad3-a30f-657e0da6a3ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"5952d7a8-fb3e-4e19-a522-091d00bb8831","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d62f3908-7367-4596-9d35-ca949c39af81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19734-3681/kubeconfig"}}
	{"specversion":"1.0","id":"1699b676-d26b-41af-807f-2531b34d3fcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3681/.minikube"}}
	{"specversion":"1.0","id":"c882fafa-ca5a-4ee5-a36e-c487cb2fd24f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ecbf5ced-9a9c-44e4-b84a-da8acab1ca0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7853ce80-8217-4515-8bc2-8df863c9bb07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.19s)
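
TestErrorJSONOutput exercises the failure path: a start with the bogus driver "fail" must still report its error as a structured event rather than free-form text, and the stdout above shows exactly that, an io.k8s.sigs.minikube.error event with name DRV_UNSUPPORTED_OS and exitcode 56. A minimal sketch of decoding that event, assuming only the field names visible in the payload above (everything else here is illustrative):

	// error_event.go: pipe a failed `minikube start --output=json` run
	// into stdin and print the first structured error event found.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type errEvent struct {
		Type string `json:"type"`
		Data struct {
			Name     string `json:"name"`
			ExitCode string `json:"exitcode"`
			Message  string `json:"message"`
		} `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev errEvent
			if json.Unmarshal(sc.Bytes(), &ev) == nil && ev.Type == "io.k8s.sigs.minikube.error" {
				// For the run above: DRV_UNSUPPORTED_OS (56): The driver 'fail' is not supported on linux/amd64
				fmt.Printf("%s (%s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
				return
			}
		}
	}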

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (33.98s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.570256412s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.50585519s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.254472342s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (33.98s)
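
The profile test drives `minikube profile` to select a profile and `profile list -ojson` to read the result back. A sketch of consuming that JSON from Go; the {"valid": [...], "invalid": [...]} shape with a Name field per profile matches current minikube releases, but treat it as an assumption rather than a stable contract:

	// profiles.go: shell out to minikube and list the valid profiles.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type profileList struct {
		Valid   []struct{ Name string } `json:"valid"`
		Invalid []struct{ Name string } `json:"invalid"`
	}

	func main() {
		out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Println("valid profile:", p.Name)
		}
	}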

TestPause/serial/Start (24.64s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (24.644217808s)
--- PASS: TestPause/serial/Start (24.64s)

TestPause/serial/SecondStartNoReconfiguration (33.2s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (33.204282027s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.20s)

TestPause/serial/Pause (0.49s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.49s)

TestPause/serial/VerifyStatus (0.13s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (125.732517ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
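
The exit status 2 above is expected: `minikube status` exits non-zero whenever the cluster is not fully running, so the test tolerates the exit code and inspects the --layout=cluster JSON instead, where a paused cluster reports StatusCode 418 / StatusName "Paused" (and a stopped kubelet 405). A decoding sketch whose struct fields mirror the payload above; the rest is illustrative:

	// pause_status.go: read the cluster-layout status of a possibly paused profile.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type cluster struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	func main() {
		cmd := exec.Command("minikube", "status", "-p", "minikube", "--output=json", "--layout=cluster")
		out, _ := cmd.Output() // a paused cluster exits 2; the JSON on stdout is still complete
		var c cluster
		if err := json.Unmarshal(out, &c); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d %s\n", c.Name, c.StatusCode, c.StatusName) // e.g. minikube: 418 Paused
	}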

TestPause/serial/Unpause (0.4s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.40s)

TestPause/serial/PauseAgain (0.55s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.55s)

TestPause/serial/DeletePaused (1.71s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.713289692s)
--- PASS: TestPause/serial/DeletePaused (1.71s)

TestPause/serial/VerifyDeletedResources (0.06s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

TestRunningBinaryUpgrade (69.75s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3086974850 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3086974850 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (32.001559459s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (34.312843513s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (2.952218034s)
--- PASS: TestRunningBinaryUpgrade (69.75s)

TestStoppedBinaryUpgrade/Setup (0.36s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.36s)

TestStoppedBinaryUpgrade/Upgrade (51.04s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3510809044 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3510809044 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (15.280946984s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3510809044 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3510809044 -p minikube stop: (23.730375975s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.024641412s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

TestKubernetesUpgrade (308.08s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (29.102307008s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.291368464s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (70.06764ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m18.237103425s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (65.982976ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-3681/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-3681/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (17.998673094s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.253198485s)
--- PASS: TestKubernetesUpgrade (308.08s)
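
The downgrade leg is the key assertion in this test: once the cluster has been upgraded to v1.31.1, a start at v1.20.0 must be refused up front with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) rather than attempted, which is what the stderr above shows. A minimal sketch of that check, with the flags copied from the log and the surrounding code illustrative:

	// downgrade_guard.go: verify minikube refuses an in-place Kubernetes downgrade.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "minikube", "--memory=2200",
			"--kubernetes-version=v1.20.0", "--driver=none", "--bootstrapper=kubeadm")
		err := cmd.Run()
		var ee *exec.ExitError
		if !errors.As(err, &ee) || ee.ExitCode() != 106 {
			panic(fmt.Sprintf("expected exit code 106 (K8S_DOWNGRADE_UNSUPPORTED), got %v", err))
		}
		fmt.Println("downgrade correctly rejected with exit code 106")
	}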

Test skip (56/167)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
102 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
104 TestFunctional/parallel/SSHCmd 0
105 TestFunctional/parallel/CpCmd 0
107 TestFunctional/parallel/FileSync 0
108 TestFunctional/parallel/CertSync 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/ImageCommands 0
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0
125 TestGvisorAddon 0
126 TestMultiControlPlane 0
134 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
161 TestKicCustomNetwork 0
162 TestKicExistingNetwork 0
163 TestKicCustomSubnet 0
164 TestKicStaticIP 0
167 TestMountStart 0
168 TestMultiNode 0
169 TestNetworkPlugins 0
170 TestNoKubernetes 0
171 TestChangeNoneUser 0
182 TestPreload 0
183 TestScheduledStopWindows 0
184 TestScheduledStopUnix 0
185 TestSkaffold 0
188 TestStartStop/group/old-k8s-version 0.12
189 TestStartStop/group/newest-cni 0.12
190 TestStartStop/group/default-k8s-diff-port 0.12
191 TestStartStop/group/no-preload 0.13
192 TestStartStop/group/disable-driver-mounts 0.12
193 TestStartStop/group/embed-certs 0.12
194 TestInsufficientStorage 0
201 TestMissingContainerUpgrade 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver: minikube does not control the user's container runtime on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/HA (multi-control-plane) clusters
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping: test does not apply to the none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.12s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.12s)

TestStartStop/group/newest-cni (0.12s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.12s)

TestStartStop/group/default-k8s-diff-port (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.12s)

TestStartStop/group/no-preload (0.13s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)

TestStartStop/group/embed-certs (0.12s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.12s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)