Test Report: none_Linux 19640

e5b440675da001c9bcd97e7df406aef1ef05cbc8:2024-09-13:36202
Test fail (1/168)

Order  Failed test                    Duration
33     TestAddons/parallel/Registry   71.79s

TestAddons/parallel/Registry (71.79s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.685588ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-rctqp" [5e62917f-fbaf-47a4-ab23-4c40518c66e2] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003309684s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-tcgsk" [2c97d680-312e-4178-b7a5-ec0b4dacb6a2] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003533863s
addons_test.go:342: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.080478004s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/13 23:39:11 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
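
To replay the failing probe by hand outside the test harness, a minimal sketch, assuming kubectl points at this same minikube context (re-enable the addon first if the test's cleanup above disabled it):

# Re-enable the addon if needed:
#   out/minikube-linux-amd64 -p minikube addons enable registry
# Same in-cluster check the test runs at addons_test.go:347.
kubectl --context minikube run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

# If the wget times out again, inspect the Service and its endpoints directly.
kubectl --context minikube -n kube-system get svc registry
kubectl --context minikube -n kube-system get endpoints registry
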
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:43107               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:27 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:27 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 13 Sep 24 23:27 UTC | 13 Sep 24 23:29 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	|         | --addons=helm-tiller                 |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 13 Sep 24 23:29 UTC | 13 Sep 24 23:29 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:27:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:27:40.042854   17442 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:27:40.043112   17442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:27:40.043121   17442 out.go:358] Setting ErrFile to fd 2...
	I0913 23:27:40.043125   17442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:27:40.043329   17442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5268/.minikube/bin
	I0913 23:27:40.043997   17442 out.go:352] Setting JSON to false
	I0913 23:27:40.044804   17442 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":612,"bootTime":1726269448,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:27:40.044896   17442 start.go:139] virtualization: kvm guest
	I0913 23:27:40.047207   17442 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0913 23:27:40.048483   17442 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19640-5268/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 23:27:40.048520   17442 notify.go:220] Checking for updates...
	I0913 23:27:40.048544   17442 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:27:40.049780   17442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:27:40.051042   17442 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5268/kubeconfig
	I0913 23:27:40.052443   17442 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5268/.minikube
	I0913 23:27:40.053770   17442 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:27:40.055051   17442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:27:40.056493   17442 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:27:40.065713   17442 out.go:177] * Using the none driver based on user configuration
	I0913 23:27:40.066841   17442 start.go:297] selected driver: none
	I0913 23:27:40.066861   17442 start.go:901] validating driver "none" against <nil>
	I0913 23:27:40.066876   17442 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:27:40.066932   17442 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0913 23:27:40.067318   17442 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0913 23:27:40.068029   17442 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:27:40.068282   17442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:27:40.068320   17442 cni.go:84] Creating CNI manager for ""
	I0913 23:27:40.068369   17442 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 23:27:40.068379   17442 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 23:27:40.068431   17442 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:27:40.069834   17442 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0913 23:27:40.071136   17442 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/config.json ...
	I0913 23:27:40.071169   17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/config.json: {Name:mk8b3ce9215b269a8298bd5f636510296ec26782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:40.071292   17442 start.go:360] acquireMachinesLock for minikube: {Name:mkd91394d598fa6821cf9805e599aa6da131df53 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0913 23:27:40.071337   17442 start.go:364] duration metric: took 30.169µs to acquireMachinesLock for "minikube"
	I0913 23:27:40.071353   17442 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 23:27:40.071486   17442 start.go:125] createHost starting for "" (driver="none")
	I0913 23:27:40.072849   17442 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0913 23:27:40.074029   17442 exec_runner.go:51] Run: systemctl --version
	I0913 23:27:40.076863   17442 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0913 23:27:40.076887   17442 client.go:168] LocalClient.Create starting
	I0913 23:27:40.076926   17442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5268/.minikube/certs/ca.pem
	I0913 23:27:40.076950   17442 main.go:141] libmachine: Decoding PEM data...
	I0913 23:27:40.076963   17442 main.go:141] libmachine: Parsing certificate...
	I0913 23:27:40.076998   17442 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19640-5268/.minikube/certs/cert.pem
	I0913 23:27:40.077023   17442 main.go:141] libmachine: Decoding PEM data...
	I0913 23:27:40.077031   17442 main.go:141] libmachine: Parsing certificate...
	I0913 23:27:40.077313   17442 client.go:171] duration metric: took 419.989µs to LocalClient.Create
	I0913 23:27:40.077341   17442 start.go:167] duration metric: took 480.068µs to libmachine.API.Create "minikube"
	I0913 23:27:40.077346   17442 start.go:293] postStartSetup for "minikube" (driver="none")
	I0913 23:27:40.077392   17442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 23:27:40.077427   17442 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 23:27:40.085405   17442 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0913 23:27:40.085423   17442 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0913 23:27:40.085431   17442 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0913 23:27:40.087097   17442 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0913 23:27:40.088124   17442 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5268/.minikube/addons for local assets ...
	I0913 23:27:40.088164   17442 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-5268/.minikube/files for local assets ...
	I0913 23:27:40.088186   17442 start.go:296] duration metric: took 10.835277ms for postStartSetup
	I0913 23:27:40.088775   17442 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/config.json ...
	I0913 23:27:40.088900   17442 start.go:128] duration metric: took 17.405109ms to createHost
	I0913 23:27:40.088911   17442 start.go:83] releasing machines lock for "minikube", held for 17.565934ms
	I0913 23:27:40.089278   17442 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0913 23:27:40.089369   17442 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0913 23:27:40.091967   17442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0913 23:27:40.092008   17442 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 23:27:40.101098   17442 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0913 23:27:40.101140   17442 start.go:495] detecting cgroup driver to use...
	I0913 23:27:40.101173   17442 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0913 23:27:40.101295   17442 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:27:40.117356   17442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0913 23:27:40.126064   17442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 23:27:40.133958   17442 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 23:27:40.134002   17442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 23:27:40.143256   17442 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 23:27:40.152284   17442 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 23:27:40.160537   17442 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 23:27:40.169471   17442 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 23:27:40.177164   17442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 23:27:40.185289   17442 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 23:27:40.193354   17442 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 23:27:40.202659   17442 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 23:27:40.209921   17442 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 23:27:40.217004   17442 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0913 23:27:40.430326   17442 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0913 23:27:40.494583   17442 start.go:495] detecting cgroup driver to use...
	I0913 23:27:40.494625   17442 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0913 23:27:40.494724   17442 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:27:40.513873   17442 exec_runner.go:51] Run: which cri-dockerd
	I0913 23:27:40.514771   17442 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 23:27:40.521993   17442 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0913 23:27:40.522012   17442 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0913 23:27:40.522048   17442 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0913 23:27:40.529615   17442 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0913 23:27:40.529746   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2101396467 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0913 23:27:40.537035   17442 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0913 23:27:40.764653   17442 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0913 23:27:40.982299   17442 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 23:27:40.982472   17442 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0913 23:27:40.982488   17442 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0913 23:27:40.982531   17442 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0913 23:27:40.990782   17442 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0913 23:27:40.990911   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2963141349 /etc/docker/daemon.json
	I0913 23:27:40.998434   17442 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0913 23:27:41.234618   17442 exec_runner.go:51] Run: sudo systemctl restart docker
	I0913 23:27:41.526286   17442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 23:27:41.536959   17442 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0913 23:27:41.552568   17442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 23:27:41.563087   17442 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0913 23:27:41.802535   17442 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0913 23:27:42.022296   17442 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0913 23:27:42.254992   17442 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0913 23:27:42.268385   17442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 23:27:42.278461   17442 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0913 23:27:42.500889   17442 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0913 23:27:42.567208   17442 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 23:27:42.567285   17442 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0913 23:27:42.568620   17442 start.go:563] Will wait 60s for crictl version
	I0913 23:27:42.568667   17442 exec_runner.go:51] Run: which crictl
	I0913 23:27:42.569465   17442 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0913 23:27:42.598370   17442 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0913 23:27:42.598423   17442 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0913 23:27:42.618154   17442 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0913 23:27:42.639258   17442 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0913 23:27:42.639338   17442 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0913 23:27:42.641934   17442 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0913 23:27:42.642988   17442 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 23:27:42.643093   17442 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 23:27:42.643103   17442 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0913 23:27:42.643187   17442 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0913 23:27:42.643224   17442 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0913 23:27:42.691163   17442 cni.go:84] Creating CNI manager for ""
	I0913 23:27:42.691189   17442 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 23:27:42.691198   17442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 23:27:42.691217   17442 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 23:27:42.691358   17442 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
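	# Sketch (not from the captured run): a generated config like the one above
	# can be sanity-checked before "kubeadm init". Recent kubeadm releases,
	# including the v1.31.1 pinned here, ship a validator subcommand:
	#   sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# A full dry run against the same file, without modifying the host:
	#   sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml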
	I0913 23:27:42.691410   17442 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 23:27:42.699298   17442 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0913 23:27:42.699342   17442 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0913 23:27:42.707274   17442 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0913 23:27:42.707274   17442 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0913 23:27:42.707335   17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0913 23:27:42.707367   17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0913 23:27:42.707306   17442 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0913 23:27:42.707447   17442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:27:42.718092   17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0913 23:27:42.758021   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3450046635 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0913 23:27:42.764538   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1427354373 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0913 23:27:42.791232   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3399511020 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0913 23:27:42.854756   17442 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 23:27:42.863665   17442 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0913 23:27:42.863684   17442 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0913 23:27:42.863728   17442 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0913 23:27:42.870791   17442 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0913 23:27:42.870961   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube944603868 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0913 23:27:42.878197   17442 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0913 23:27:42.878213   17442 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0913 23:27:42.878241   17442 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0913 23:27:42.885374   17442 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 23:27:42.885482   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1796042159 /lib/systemd/system/kubelet.service
	I0913 23:27:42.892519   17442 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0913 23:27:42.892609   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube593589632 /var/tmp/minikube/kubeadm.yaml.new
	I0913 23:27:42.899693   17442 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0913 23:27:42.900872   17442 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0913 23:27:43.132492   17442 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0913 23:27:43.147234   17442 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube for IP: 10.138.0.48
	I0913 23:27:43.147257   17442 certs.go:194] generating shared ca certs ...
	I0913 23:27:43.147273   17442 certs.go:226] acquiring lock for ca certs: {Name:mkaa457a5d672eb517f9ab47f243967501ec8a46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:43.147407   17442 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-5268/.minikube/ca.key
	I0913 23:27:43.147446   17442 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-5268/.minikube/proxy-client-ca.key
	I0913 23:27:43.147455   17442 certs.go:256] generating profile certs ...
	I0913 23:27:43.147502   17442 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/client.key
	I0913 23:27:43.147520   17442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/client.crt with IP's: []
	I0913 23:27:43.256320   17442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/client.crt ...
	I0913 23:27:43.256352   17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/client.crt: {Name:mkaa9242556f51dab016296e8ae9815dcc02adf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:43.256488   17442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/client.key ...
	I0913 23:27:43.256499   17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/client.key: {Name:mk35908491563b31a3f070a6b55036665c26a19e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:43.256559   17442 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0913 23:27:43.256573   17442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0913 23:27:43.508037   17442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0913 23:27:43.508065   17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk4d1294ccf6a0b4ab4193fa085c9a0d57eb998f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:43.508187   17442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0913 23:27:43.508197   17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk3ab7673b84dc98fab10aedb75cee3ee57a3d24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:43.508247   17442 certs.go:381] copying /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.crt
	I0913 23:27:43.508388   17442 certs.go:385] copying /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.key
	I0913 23:27:43.508447   17442 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.key
	I0913 23:27:43.508461   17442 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0913 23:27:43.639801   17442 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.crt ...
	I0913 23:27:43.639830   17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.crt: {Name:mk7c623c898122472e947fba33dd326e8ea2112d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:43.639957   17442 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.key ...
	I0913 23:27:43.639968   17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.key: {Name:mkc79bbfb230eb24f3e64926e49c71bfab00a0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:43.640117   17442 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5268/.minikube/certs/ca-key.pem (1675 bytes)
	I0913 23:27:43.640148   17442 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5268/.minikube/certs/ca.pem (1078 bytes)
	I0913 23:27:43.640170   17442 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5268/.minikube/certs/cert.pem (1123 bytes)
	I0913 23:27:43.640194   17442 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-5268/.minikube/certs/key.pem (1675 bytes)
	I0913 23:27:43.640796   17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 23:27:43.640904   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1411481762 /var/lib/minikube/certs/ca.crt
	I0913 23:27:43.650220   17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0913 23:27:43.650334   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2872038555 /var/lib/minikube/certs/ca.key
	I0913 23:27:43.657763   17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 23:27:43.657860   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4040138273 /var/lib/minikube/certs/proxy-client-ca.crt
	I0913 23:27:43.665078   17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0913 23:27:43.665170   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube743613709 /var/lib/minikube/certs/proxy-client-ca.key
	I0913 23:27:43.672503   17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0913 23:27:43.672614   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube841865764 /var/lib/minikube/certs/apiserver.crt
	I0913 23:27:43.679833   17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 23:27:43.679931   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube570108513 /var/lib/minikube/certs/apiserver.key
	I0913 23:27:43.687396   17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 23:27:43.687610   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube407869226 /var/lib/minikube/certs/proxy-client.crt
	I0913 23:27:43.695364   17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 23:27:43.695481   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2207609115 /var/lib/minikube/certs/proxy-client.key
	I0913 23:27:43.703373   17442 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0913 23:27:43.703390   17442 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:43.703418   17442 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:43.710578   17442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-5268/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 23:27:43.710696   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1558285643 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:43.718576   17442 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 23:27:43.718677   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube564877468 /var/lib/minikube/kubeconfig
	I0913 23:27:43.725997   17442 exec_runner.go:51] Run: openssl version
	I0913 23:27:43.728874   17442 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 23:27:43.736770   17442 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:43.737944   17442 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:43.737981   17442 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:43.740633   17442 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 23:27:43.748077   17442 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 23:27:43.749145   17442 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 23:27:43.749176   17442 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:27:43.749263   17442 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 23:27:43.764071   17442 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 23:27:43.771942   17442 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 23:27:43.779598   17442 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0913 23:27:43.797464   17442 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 23:27:43.804972   17442 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 23:27:43.804989   17442 kubeadm.go:157] found existing configuration files:
	
	I0913 23:27:43.805030   17442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 23:27:43.812356   17442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 23:27:43.812400   17442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 23:27:43.819325   17442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 23:27:43.826625   17442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 23:27:43.826667   17442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 23:27:43.833477   17442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 23:27:43.840625   17442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 23:27:43.840664   17442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 23:27:43.847528   17442 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 23:27:43.854259   17442 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 23:27:43.854294   17442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 23:27:43.860693   17442 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0913 23:27:43.891565   17442 kubeadm.go:310] W0913 23:27:43.891452   18319 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:27:43.892159   17442 kubeadm.go:310] W0913 23:27:43.892110   18319 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:27:43.893675   17442 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 23:27:43.893728   17442 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 23:27:43.992568   17442 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 23:27:43.992608   17442 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 23:27:43.992613   17442 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 23:27:43.992618   17442 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 23:27:44.003644   17442 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 23:27:44.007235   17442 out.go:235]   - Generating certificates and keys ...
	I0913 23:27:44.007278   17442 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 23:27:44.007292   17442 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 23:27:44.075670   17442 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 23:27:44.303936   17442 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 23:27:44.498053   17442 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 23:27:44.600385   17442 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 23:27:44.743923   17442 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 23:27:44.744037   17442 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0913 23:27:45.081650   17442 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 23:27:45.081790   17442 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0913 23:27:45.206908   17442 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 23:27:45.357294   17442 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 23:27:45.545262   17442 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 23:27:45.545446   17442 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 23:27:45.610682   17442 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 23:27:46.117404   17442 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 23:27:46.441754   17442 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 23:27:46.568612   17442 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 23:27:46.735805   17442 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 23:27:46.736389   17442 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 23:27:46.739735   17442 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 23:27:46.741773   17442 out.go:235]   - Booting up control plane ...
	I0913 23:27:46.741794   17442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 23:27:46.741813   17442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 23:27:46.741820   17442 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 23:27:46.762007   17442 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 23:27:46.766143   17442 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 23:27:46.766204   17442 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 23:27:46.998267   17442 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 23:27:46.998288   17442 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 23:27:47.499821   17442 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.52557ms
	I0913 23:27:47.499844   17442 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 23:27:51.501297   17442 kubeadm.go:310] [api-check] The API server is healthy after 4.001474093s
	I0913 23:27:51.511992   17442 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 23:27:51.520723   17442 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 23:27:51.536233   17442 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 23:27:51.536255   17442 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 23:27:51.542357   17442 kubeadm.go:310] [bootstrap-token] Using token: k48vab.7dyg6bcsudjwhanl
	I0913 23:27:51.543547   17442 out.go:235]   - Configuring RBAC rules ...
	I0913 23:27:51.543573   17442 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 23:27:51.546266   17442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 23:27:51.551401   17442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 23:27:51.553684   17442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 23:27:51.555858   17442 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 23:27:51.557926   17442 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 23:27:51.908217   17442 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 23:27:52.330049   17442 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 23:27:52.907307   17442 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 23:27:52.908199   17442 kubeadm.go:310] 
	I0913 23:27:52.908211   17442 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 23:27:52.908215   17442 kubeadm.go:310] 
	I0913 23:27:52.908219   17442 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 23:27:52.908223   17442 kubeadm.go:310] 
	I0913 23:27:52.908228   17442 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 23:27:52.908232   17442 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 23:27:52.908236   17442 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 23:27:52.908239   17442 kubeadm.go:310] 
	I0913 23:27:52.908242   17442 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 23:27:52.908246   17442 kubeadm.go:310] 
	I0913 23:27:52.908249   17442 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 23:27:52.908253   17442 kubeadm.go:310] 
	I0913 23:27:52.908256   17442 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 23:27:52.908260   17442 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 23:27:52.908264   17442 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 23:27:52.908273   17442 kubeadm.go:310] 
	I0913 23:27:52.908277   17442 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 23:27:52.908281   17442 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 23:27:52.908284   17442 kubeadm.go:310] 
	I0913 23:27:52.908288   17442 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token k48vab.7dyg6bcsudjwhanl \
	I0913 23:27:52.908292   17442 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a1f628722583726a6fb852629b3c574616ae97686c64770f04122966ec038809 \
	I0913 23:27:52.908296   17442 kubeadm.go:310] 	--control-plane 
	I0913 23:27:52.908300   17442 kubeadm.go:310] 
	I0913 23:27:52.908304   17442 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 23:27:52.908307   17442 kubeadm.go:310] 
	I0913 23:27:52.908311   17442 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token k48vab.7dyg6bcsudjwhanl \
	I0913 23:27:52.908316   17442 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a1f628722583726a6fb852629b3c574616ae97686c64770f04122966ec038809 
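	[Note on the join commands above: the --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA certificate's SubjectPublicKeyInfo, which lets a joining node pin the CA out of band. A minimal Go sketch that recomputes it, assuming the conventional /etc/kubernetes/pki/ca.crt path (illustrative, not part of the test harness):
	
		package main
	
		import (
			"crypto/sha256"
			"crypto/x509"
			"encoding/hex"
			"encoding/pem"
			"fmt"
			"os"
		)
	
		func main() {
			// Read and parse the cluster CA certificate (assumed path).
			pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
			if err != nil {
				panic(err)
			}
			block, _ := pem.Decode(pemBytes)
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				panic(err)
			}
			// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
			sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
			fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
		}
	
	Its output should match the sha256:… value embedded in both join commands above.]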
	I0913 23:27:52.911005   17442 cni.go:84] Creating CNI manager for ""
	I0913 23:27:52.911037   17442 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 23:27:52.912748   17442 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 23:27:52.913883   17442 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0913 23:27:52.923598   17442 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 23:27:52.923732   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube874528224 /etc/cni/net.d/1-k8s.conflist
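	[The 496-byte conflist staged above is what wires up the bridge CNI chosen at cni.go:158. Its contents are not echoed in the log; the sketch below shows the usual shape of a bridge + portmap conflist as an assumption — field values are illustrative and may differ from minikube's real /etc/cni/net.d/1-k8s.conflist:
	
		package main
	
		import "os"
	
		// conflist is a hypothetical bridge CNI config of the kind staged above.
		const conflist = `{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}`
	
		func main() {
			// Stage the config where a root copy step could pick it up.
			if err := os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
				panic(err)
			}
		}
	]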
	I0913 23:27:52.932495   17442 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 23:27:52.932567   17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:52.932615   17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_13T23_27_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0913 23:27:52.941587   17442 ops.go:34] apiserver oom_adj: -16
	I0913 23:27:53.009149   17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:53.510027   17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:54.009720   17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:54.509811   17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:55.009502   17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:55.510251   17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:56.010033   17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:56.510046   17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:57.009839   17442 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:57.071893   17442 kubeadm.go:1113] duration metric: took 4.139377916s to wait for elevateKubeSystemPrivileges
	I0913 23:27:57.071932   17442 kubeadm.go:394] duration metric: took 13.322756725s to StartCluster
	I0913 23:27:57.071956   17442 settings.go:142] acquiring lock: {Name:mk426e1dc1c608a4bb98bf9e7b79b1ed45a6c0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:57.072030   17442 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-5268/kubeconfig
	I0913 23:27:57.072657   17442 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-5268/kubeconfig: {Name:mk1b3a366c7b53fc601c2b9d45908b04288a17e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:57.072898   17442 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 23:27:57.072886   17442 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 23:27:57.073016   17442 addons.go:69] Setting yakd=true in profile "minikube"
	I0913 23:27:57.073024   17442 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0913 23:27:57.073036   17442 addons.go:234] Setting addon yakd=true in "minikube"
	I0913 23:27:57.073045   17442 mustload.go:65] Loading cluster: minikube
	I0913 23:27:57.073070   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.073087   17442 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0913 23:27:57.073081   17442 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0913 23:27:57.073105   17442 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0913 23:27:57.073120   17442 addons.go:69] Setting registry=true in profile "minikube"
	I0913 23:27:57.073123   17442 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:27:57.073130   17442 addons.go:234] Setting addon registry=true in "minikube"
	I0913 23:27:57.073163   17442 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0913 23:27:57.073168   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.073179   17442 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0913 23:27:57.073248   17442 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:27:57.073640   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.073657   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.073682   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.073697   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.073708   17442 addons.go:69] Setting helm-tiller=true in profile "minikube"
	I0913 23:27:57.073710   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.073719   17442 addons.go:234] Setting addon helm-tiller=true in "minikube"
	I0913 23:27:57.073725   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.073736   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.073740   17442 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0913 23:27:57.073761   17442 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0913 23:27:57.073766   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.073774   17442 addons.go:69] Setting volcano=true in profile "minikube"
	I0913 23:27:57.073786   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.073792   17442 addons.go:234] Setting addon volcano=true in "minikube"
	I0913 23:27:57.073815   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.074243   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.074262   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.074294   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.073699   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.074365   17442 out.go:177] * Configuring local host environment ...
	I0913 23:27:57.073762   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.074415   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.074421   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.074451   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.074502   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.074515   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.074545   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.073107   17442 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0913 23:27:57.074684   17442 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0913 23:27:57.074679   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.074702   17442 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0913 23:27:57.074726   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.075333   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.075340   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.075348   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.075353   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.075376   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.075383   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.074363   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.075944   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.075977   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0913 23:27:57.076988   17442 out.go:270] * 
	W0913 23:27:57.077012   17442 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0913 23:27:57.077020   17442 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0913 23:27:57.077027   17442 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0913 23:27:57.077074   17442 out.go:270] * 
	W0913 23:27:57.077137   17442 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0913 23:27:57.077192   17442 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0913 23:27:57.077220   17442 out.go:270] * 
	W0913 23:27:57.077267   17442 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0913 23:27:57.077306   17442 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0913 23:27:57.077337   17442 out.go:270] * 
	W0913 23:27:57.077364   17442 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0913 23:27:57.077419   17442 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 23:27:57.073727   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.077493   17442 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0913 23:27:57.077533   17442 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0913 23:27:57.077566   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.074390   17442 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0913 23:27:57.077750   17442 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0913 23:27:57.077782   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.078238   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.078283   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.078307   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.078326   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.078334   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.078363   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.074379   17442 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0913 23:27:57.078551   17442 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0913 23:27:57.078583   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.078516   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.078731   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.073063   17442 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0913 23:27:57.078814   17442 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0913 23:27:57.078858   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.079164   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.079188   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.079219   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.079487   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.079540   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.079581   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.080453   17442 out.go:177] * Verifying Kubernetes components...
	I0913 23:27:57.084435   17442 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0913 23:27:57.097582   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.098468   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.098508   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.098470   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.102971   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.103263   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.106329   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.110192   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.113567   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.116966   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.116978   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.117726   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.119309   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.125740   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.125801   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.129543   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.129605   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.131088   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.131135   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.132646   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.132690   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.134792   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.134845   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.135471   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.135525   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.135731   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.135928   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.137670   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.137722   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.138977   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.139024   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.139279   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.139322   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.142607   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.144979   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.145025   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.149168   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.149234   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.157479   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.157505   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.158036   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.158162   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.158858   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.158879   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.161849   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.161922   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.166641   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.166669   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.170179   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.170211   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.170249   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.170261   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.170274   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.170286   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.170640   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
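	[All of the interleaved apiserver checks in this stretch follow one three-step probe, run once per addon goroutine: grep /proc/<apiserver pid>/cgroup for the freezer hierarchy, read its freezer.state and require THAWED (i.e. the pod is not paused), then GET /healthz. A minimal cgroup-v1 sketch of the first two steps (hypothetical helper, not minikube's api_server.go):
	
		package main
	
		import (
			"fmt"
			"os"
			"strings"
		)
	
		// freezerState finds pid's freezer cgroup and returns its freezer.state.
		func freezerState(pid string) (string, error) {
			data, err := os.ReadFile("/proc/" + pid + "/cgroup")
			if err != nil {
				return "", err
			}
			for _, line := range strings.Split(string(data), "\n") {
				// cgroup v1 lines look like "9:freezer:/kubepods/burstable/...".
				parts := strings.SplitN(line, ":", 3)
				if len(parts) == 3 && parts[1] == "freezer" {
					state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
					return strings.TrimSpace(string(state)), err
				}
			}
			return "", fmt.Errorf("no freezer hierarchy for pid %s", pid)
		}
	
		func main() {
			state, err := freezerState("18732") // apiserver PID from the log
			if err != nil {
				panic(err)
			}
			fmt.Println(state) // expect "THAWED"
		}
	
	Only after the state reads THAWED and /healthz returns 200 does each goroutine move on to its addon install.]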
	I0913 23:27:57.170847   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.171745   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.171779   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.172294   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.172318   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.172610   17442 out.go:177]   - Using image docker.io/registry:2.8.3
	I0913 23:27:57.172772   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.172829   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.172776   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.172984   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.173122   17442 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0913 23:27:57.173162   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.173854   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.175651   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.175669   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.175693   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.176543   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.177208   17442 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0913 23:27:57.177551   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.178541   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.177624   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.179438   17442 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0913 23:27:57.179479   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.179510   17442 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 23:27:57.179616   17442 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0913 23:27:57.179643   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0913 23:27:57.179821   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1248433547 /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0913 23:27:57.179823   17442 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 23:27:57.180048   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:27:57.180058   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:27:57.180088   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:27:57.180243   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.180260   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:27:57.180777   17442 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 23:27:57.180814   17442 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 23:27:57.180871   17442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 23:27:57.180894   17442 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 23:27:57.180937   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube32070201 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 23:27:57.181021   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube775660223 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 23:27:57.181544   17442 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 23:27:57.181571   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.181882   17442 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 23:27:57.182192   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.182542   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.182856   17442 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0913 23:27:57.182933   17442 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 23:27:57.183267   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 23:27:57.183371   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2192420989 /etc/kubernetes/addons/registry-rc.yaml
	I0913 23:27:57.183695   17442 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:27:57.183708   17442 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0913 23:27:57.183714   17442 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:27:57.183745   17442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:27:57.184237   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.184252   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.185429   17442 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0913 23:27:57.186716   17442 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0913 23:27:57.188209   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.188435   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.188455   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.194904   17442 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 23:27:57.195461   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.195641   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.195935   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.195952   17442 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 23:27:57.195976   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0913 23:27:57.196536   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4024711363 /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 23:27:57.197487   17442 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 23:27:57.197545   17442 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 23:27:57.200999   17442 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 23:27:57.197647   17442 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:27:57.202252   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 23:27:57.202412   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 23:27:57.203221   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3776574104 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 23:27:57.203450   17442 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 23:27:57.203601   17442 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 23:27:57.203625   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 23:27:57.203720   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2775994425 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:27:57.203724   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube186619044 /etc/kubernetes/addons/deployment.yaml
	I0913 23:27:57.203940   17442 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 23:27:57.203959   17442 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 23:27:57.204060   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1844513379 /etc/kubernetes/addons/registry-svc.yaml
	I0913 23:27:57.204688   17442 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 23:27:57.204778   17442 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 23:27:57.205023   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3076031724 /etc/kubernetes/addons/yakd-ns.yaml
	I0913 23:27:57.205215   17442 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 23:27:57.205241   17442 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 23:27:57.205725   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2279727050 /etc/kubernetes/addons/ig-namespace.yaml
	I0913 23:27:57.205734   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.205755   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.206692   17442 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0913 23:27:57.206712   17442 exec_runner.go:151] cp: helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0913 23:27:57.206810   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3991240121 /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0913 23:27:57.214805   17442 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 23:27:57.214832   17442 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 23:27:57.214947   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube929751967 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 23:27:57.215418   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.217130   17442 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 23:27:57.219708   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:27:57.220771   17442 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 23:27:57.221166   17442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 23:27:57.221433   17442 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 23:27:57.222045   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube123773802 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 23:27:57.222236   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.225914   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 23:27:57.226260   17442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 23:27:57.227601   17442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 23:27:57.228900   17442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 23:27:57.230708   17442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 23:27:57.231721   17442 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 23:27:57.234090   17442 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 23:27:57.236322   17442 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 23:27:57.236349   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 23:27:57.236517   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2985886460 /etc/kubernetes/addons/registry-proxy.yaml
	I0913 23:27:57.236907   17442 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 23:27:57.236974   17442 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 23:27:57.237163   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1771172411 /etc/kubernetes/addons/yakd-sa.yaml
	I0913 23:27:57.237926   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 23:27:57.237949   17442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 23:27:57.237973   17442 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 23:27:57.238090   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1626312397 /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:27:57.238101   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1327141144 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 23:27:57.238816   17442 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 23:27:57.238842   17442 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 23:27:57.238947   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4135178880 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 23:27:57.240633   17442 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
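	[The bash pipeline above patches the CoreDNS Corefile in place rather than regenerating it: sed inserts a hosts block ahead of the forward directive so host.minikube.internal resolves to 127.0.0.1 (the machine itself under the none driver), inserts a log directive before the errors line, and kubectl replace writes the ConfigMap back. The Corefile fragment it injects is:
	
		hosts {
		   127.0.0.1 host.minikube.internal
		   fallthrough
		}
	]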
	I0913 23:27:57.242138   17442 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 23:27:57.242161   17442 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 23:27:57.242283   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3184474248 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 23:27:57.243575   17442 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:27:57.243598   17442 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 23:27:57.243601   17442 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0913 23:27:57.243619   17442 exec_runner.go:151] cp: helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0913 23:27:57.243744   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2414221854 /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0913 23:27:57.243992   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3262758196 /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:27:57.245951   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:27:57.246357   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 23:27:57.246458   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.246504   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.253668   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:27:57.254203   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
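	[Every addon install in this run follows the same staging pattern visible above: render the manifest to a file under /tmp, sudo cp -a it into /etc/kubernetes/addons/, then apply it with the bundled kubectl against the root kubeconfig. A condensed Go sketch of that sequence for the registry addon, with paths taken from the log (hypothetical, not exec_runner itself):
	
		package main
	
		import (
			"os"
			"os/exec"
		)
	
		// run executes one command, streaming its output, and stops on failure.
		func run(args ...string) {
			cmd := exec.Command(args[0], args[1:]...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				panic(err)
			}
		}
	
		func main() {
			kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
			// Stage the rendered manifest, then apply it as root
			// (sudo accepts a leading VAR=value environment assignment).
			run("sudo", "cp", "-a", "/tmp/minikube2192420989", "/etc/kubernetes/addons/registry-rc.yaml")
			run("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply",
				"-f", "/etc/kubernetes/addons/registry-rc.yaml",
				"-f", "/etc/kubernetes/addons/registry-svc.yaml",
				"-f", "/etc/kubernetes/addons/registry-proxy.yaml")
		}
	]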
	I0913 23:27:57.255119   17442 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 23:27:57.260937   17442 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 23:27:57.260990   17442 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 23:27:57.261006   17442 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 23:27:57.261116   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube751495581 /etc/kubernetes/addons/yakd-crb.yaml
	I0913 23:27:57.261116   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube229584318 /etc/kubernetes/addons/ig-role.yaml
	I0913 23:27:57.267971   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0913 23:27:57.268434   17442 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 23:27:57.268457   17442 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 23:27:57.268560   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3705089436 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 23:27:57.268693   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:27:57.268724   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:27:57.275120   17442 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 23:27:57.275150   17442 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 23:27:57.275379   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube766263024 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 23:27:57.281748   17442 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 23:27:57.281783   17442 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 23:27:57.281906   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3060564583 /etc/kubernetes/addons/yakd-svc.yaml
	I0913 23:27:57.286437   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:27:57.290240   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.290270   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.295227   17442 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:27:57.295258   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 23:27:57.295402   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube229927536 /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:27:57.296224   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:27:57.296249   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:27:57.300333   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.300377   17442 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 23:27:57.300405   17442 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0913 23:27:57.300412   17442 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0913 23:27:57.300458   17442 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0913 23:27:57.301410   17442 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 23:27:57.301441   17442 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 23:27:57.301574   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube76485290 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 23:27:57.304376   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:27:57.306988   17442 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 23:27:57.308074   17442 out.go:177]   - Using image docker.io/busybox:stable
	I0913 23:27:57.309443   17442 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:27:57.309474   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 23:27:57.309664   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1904369519 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:27:57.317292   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:27:57.324054   17442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 23:27:57.324081   17442 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 23:27:57.324206   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3194742819 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 23:27:57.325912   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:27:57.326591   17442 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 23:27:57.326613   17442 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 23:27:57.326716   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2293852135 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 23:27:57.327346   17442 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 23:27:57.327373   17442 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 23:27:57.327486   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3986840925 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 23:27:57.338418   17442 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:27:57.338449   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 23:27:57.338566   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1371611737 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:27:57.338795   17442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 23:27:57.338818   17442 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 23:27:57.338908   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube613785979 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 23:27:57.351014   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:27:57.359395   17442 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 23:27:57.359425   17442 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 23:27:57.359540   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2815131364 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 23:27:57.360319   17442 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 23:27:57.360444   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube639427477 /etc/kubernetes/addons/storageclass.yaml
	I0913 23:27:57.388178   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 23:27:57.412399   17442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 23:27:57.412434   17442 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 23:27:57.412572   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3973050986 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 23:27:57.428040   17442 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 23:27:57.428077   17442 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 23:27:57.428197   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3028262753 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 23:27:57.434296   17442 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 23:27:57.434323   17442 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 23:27:57.434449   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube905477838 /etc/kubernetes/addons/ig-crd.yaml
	I0913 23:27:57.464006   17442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 23:27:57.464044   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 23:27:57.464220   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube544029514 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 23:27:57.502822   17442 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0913 23:27:57.502822   17442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 23:27:57.502957   17442 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 23:27:57.503108   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1704774810 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 23:27:57.516500   17442 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:27:57.516533   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 23:27:57.518903   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3893659286 /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:27:57.566647   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:27:57.588510   17442 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0913 23:27:57.590959   17442 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0913 23:27:57.590982   17442 node_ready.go:38] duration metric: took 2.436908ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0913 23:27:57.590994   17442 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:27:57.599038   17442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 23:27:57.599075   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 23:27:57.599222   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube924088179 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 23:27:57.604688   17442 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
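(The pod_ready loop above polls the Ready condition on each system-critical pod. A hedged manual equivalent of one iteration, reusing the context, namespace, and pod name from this run, is:

	kubectl --context minikube -n kube-system get pod etcd-ubuntu-20-agent-2 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

This prints "True" once the kubelet reports the pod Ready, which is the same signal pod_ready.go is waiting on.)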
	I0913 23:27:57.646801   17442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 23:27:57.646835   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 23:27:57.646993   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2813958811 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 23:27:57.708518   17442 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
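(The host record above is written into the coredns ConfigMap in kube-system. Its exact form is not shown in this run; based on the standard CoreDNS hosts plugin, the injected stanza is assumed to look roughly like:

	hosts {
	   127.0.0.1 host.minikube.internal
	   fallthrough
	}

and can be inspected with: kubectl -n kube-system get configmap coredns -o yaml.)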
	I0913 23:27:57.738945   17442 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:27:57.738992   17442 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 23:27:57.739141   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3590475104 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:27:57.865690   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:27:58.177489   17442 addons.go:475] Verifying addon registry=true in "minikube"
	I0913 23:27:58.180638   17442 out.go:177] * Verifying registry addon...
	I0913 23:27:58.183465   17442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 23:27:58.187246   17442 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 23:27:58.187267   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:58.208909   17442 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0913 23:27:58.222182   17442 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0913 23:27:58.418624   17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.163562492s)
	I0913 23:27:58.448000   17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.161507323s)
	I0913 23:27:58.448035   17442 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0913 23:27:58.665832   17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.339874126s)
	I0913 23:27:58.689887   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:58.706577   17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.139796241s)
	I0913 23:27:59.195648   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:59.213372   17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.862281521s)
	W0913 23:27:59.213487   17442 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 23:27:59.213685   17442 retry.go:31] will retry after 143.241383ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
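(The failure above is the classic CRD-before-CR ordering race: the same kubectl apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass, but the client resolves REST mappings before the new CRDs are established, so the custom kind is not yet known. minikube simply retries, with --force below; outside this code path the usual fix is a two-phase apply, sketched here with hypothetical file names:

	# Phase 1: install the CRDs and wait until the API server serves them.
	kubectl apply -f snapshot-crds.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	# Phase 2: only now create resources of the new kinds.
	kubectl apply -f csi-hostpath-snapshotclass.yaml
)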
	I0913 23:27:59.359243   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:27:59.618398   17442 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:59.692502   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:00.200887   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:00.258799   17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.032845056s)
	I0913 23:28:00.590595   17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.724842158s)
	I0913 23:28:00.590640   17442 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0913 23:28:00.606332   17442 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 23:28:00.609194   17442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 23:28:00.613287   17442 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 23:28:00.613311   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:00.714064   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:01.114348   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:01.217763   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:01.613812   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:01.686868   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:02.110017   17442 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:02.112949   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:02.213113   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:02.553505   17442 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.194195078s)
	I0913 23:28:02.614225   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:02.687289   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:03.113274   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:03.186707   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:03.613258   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:03.687268   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:04.110441   17442 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:04.113154   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:04.195685   17442 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 23:28:04.195864   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube804519332 /var/lib/minikube/google_application_credentials.json
	I0913 23:28:04.206536   17442 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 23:28:04.206659   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2477774900 /var/lib/minikube/google_cloud_project
	I0913 23:28:04.215433   17442 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0913 23:28:04.215480   17442 host.go:66] Checking if "minikube" exists ...
	I0913 23:28:04.216022   17442 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0913 23:28:04.216041   17442 api_server.go:166] Checking apiserver status ...
	I0913 23:28:04.216075   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:28:04.232537   17442 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/18732/cgroup
	I0913 23:28:04.241371   17442 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725"
	I0913 23:28:04.241423   17442 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/3250f8e27aaa117976dcac01b71fe9a9c75c5977e74e557c7e7add452ad56725/freezer.state
	I0913 23:28:04.249675   17442 api_server.go:204] freezer state: "THAWED"
	I0913 23:28:04.249700   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:28:04.317704   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
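(The gcp-auth enable path just re-verified the apiserver the same way the earlier checks did: pgrep finds the kube-apiserver pid, that pid's cgroup is read to confirm the freezer state is THAWED, i.e. the process is not frozen, and /healthz must return 200. A manual re-run of the same probe, assuming cgroup v1 as on this host:

	PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	sudo grep ':freezer:' /proc/$PID/cgroup        # locate the freezer cgroup path
	# then read freezer.state under /sys/fs/cgroup/freezer/<path>; expect THAWED
	kubectl --context minikube get --raw /healthz  # expect: ok
)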
	I0913 23:28:04.317780   17442 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 23:28:04.318829   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:04.393952   17442 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:28:04.471556   17442 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 23:28:04.534702   17442 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 23:28:04.534771   17442 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 23:28:04.534919   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4106205132 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 23:28:04.544587   17442 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 23:28:04.544616   17442 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 23:28:04.544719   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4264924740 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 23:28:04.553557   17442 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:28:04.553586   17442 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 23:28:04.553700   17442 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2302772220 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:28:04.562301   17442 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:28:04.612989   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:04.687352   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:04.935042   17442 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0913 23:28:04.937730   17442 out.go:177] * Verifying gcp-auth addon...
	I0913 23:28:04.940478   17442 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 23:28:04.943465   17442 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 23:28:05.113111   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:05.187234   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:05.609838   17442 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:05.609862   17442 pod_ready.go:82] duration metric: took 8.005138408s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:05.609873   17442 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:05.613140   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:05.614409   17442 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:05.614426   17442 pod_ready.go:82] duration metric: took 4.545618ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:05.614434   17442 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:05.618191   17442 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:05.618210   17442 pod_ready.go:82] duration metric: took 3.769259ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:05.618221   17442 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ccmtg" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:05.621999   17442 pod_ready.go:93] pod "kube-proxy-ccmtg" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:05.622016   17442 pod_ready.go:82] duration metric: took 3.788187ms for pod "kube-proxy-ccmtg" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:05.622027   17442 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:05.626105   17442 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:05.626121   17442 pod_ready.go:82] duration metric: took 4.086761ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:05.626132   17442 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-v6s2b" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:05.713292   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:06.113698   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:06.186966   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:06.409150   17442 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-v6s2b" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:06.409179   17442 pod_ready.go:82] duration metric: took 783.038714ms for pod "nvidia-device-plugin-daemonset-v6s2b" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:06.409188   17442 pod_ready.go:39] duration metric: took 8.818182714s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:28:06.409213   17442 api_server.go:52] waiting for apiserver process to appear ...
	I0913 23:28:06.409281   17442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:28:06.429351   17442 api_server.go:72] duration metric: took 9.351869727s to wait for apiserver process to appear ...
	I0913 23:28:06.429379   17442 api_server.go:88] waiting for apiserver healthz status ...
	I0913 23:28:06.429402   17442 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0913 23:28:06.433500   17442 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0913 23:28:06.434460   17442 api_server.go:141] control plane version: v1.31.1
	I0913 23:28:06.434485   17442 api_server.go:131] duration metric: took 5.098058ms to wait for apiserver health ...
	I0913 23:28:06.434506   17442 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 23:28:06.613454   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:06.615437   17442 system_pods.go:59] 17 kube-system pods found
	I0913 23:28:06.615461   17442 system_pods.go:61] "coredns-7c65d6cfc9-khsrk" [f4523de2-fd1a-4429-8bb0-593cdcebc8d3] Running
	I0913 23:28:06.615469   17442 system_pods.go:61] "csi-hostpath-attacher-0" [23867dcc-737c-47e4-b93d-ae6177be3088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 23:28:06.615474   17442 system_pods.go:61] "csi-hostpath-resizer-0" [9bacd521-508a-4d14-be54-d8ef696790e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 23:28:06.615484   17442 system_pods.go:61] "csi-hostpathplugin-qh7d2" [d24f9989-c47a-4568-81d6-b463704b2bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 23:28:06.615488   17442 system_pods.go:61] "etcd-ubuntu-20-agent-2" [6ca55116-9e5f-4b31-a5d1-22ca33473230] Running
	I0913 23:28:06.615492   17442 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [11e78520-bf53-4063-b809-908d3527afdf] Running
	I0913 23:28:06.615495   17442 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [adc882dd-b519-4eea-a197-4f596b408017] Running
	I0913 23:28:06.615498   17442 system_pods.go:61] "kube-proxy-ccmtg" [73fdfb4e-c926-4ccd-b35e-5df2208acfb3] Running
	I0913 23:28:06.615501   17442 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [56c0780d-d761-48fe-a15f-1b1113e80709] Running
	I0913 23:28:06.615505   17442 system_pods.go:61] "metrics-server-84c5f94fbc-trf62" [f9b8c6b4-aeff-4750-9bb3-7e3953c5258c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 23:28:06.615510   17442 system_pods.go:61] "nvidia-device-plugin-daemonset-v6s2b" [44534f93-107d-44e4-a5f2-7fd26e600251] Running
	I0913 23:28:06.615515   17442 system_pods.go:61] "registry-66c9cd494c-rctqp" [5e62917f-fbaf-47a4-ab23-4c40518c66e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 23:28:06.615520   17442 system_pods.go:61] "registry-proxy-tcgsk" [2c97d680-312e-4178-b7a5-ec0b4dacb6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 23:28:06.615532   17442 system_pods.go:61] "snapshot-controller-56fcc65765-2ch4n" [7ff7b1c1-f704-4f2d-b60a-a94b57b31f28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:06.615540   17442 system_pods.go:61] "snapshot-controller-56fcc65765-zkp5d" [a54db17e-a825-43cb-99e5-5344c179faa4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:06.615546   17442 system_pods.go:61] "storage-provisioner" [26a3ab50-0a6c-4ffd-a198-a6437b518845] Running
	I0913 23:28:06.615554   17442 system_pods.go:61] "tiller-deploy-b48cc5f79-4688p" [baecb9b8-e7ad-4c4a-a256-fb125074d61b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0913 23:28:06.615561   17442 system_pods.go:74] duration metric: took 181.045208ms to wait for pod list to return data ...
	I0913 23:28:06.615570   17442 default_sa.go:34] waiting for default service account to be created ...
	I0913 23:28:06.687543   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:06.809286   17442 default_sa.go:45] found service account: "default"
	I0913 23:28:06.809312   17442 default_sa.go:55] duration metric: took 193.734262ms for default service account to be created ...
	I0913 23:28:06.809322   17442 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 23:28:07.014188   17442 system_pods.go:86] 17 kube-system pods found
	I0913 23:28:07.014224   17442 system_pods.go:89] "coredns-7c65d6cfc9-khsrk" [f4523de2-fd1a-4429-8bb0-593cdcebc8d3] Running
	I0913 23:28:07.014233   17442 system_pods.go:89] "csi-hostpath-attacher-0" [23867dcc-737c-47e4-b93d-ae6177be3088] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 23:28:07.014239   17442 system_pods.go:89] "csi-hostpath-resizer-0" [9bacd521-508a-4d14-be54-d8ef696790e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 23:28:07.014246   17442 system_pods.go:89] "csi-hostpathplugin-qh7d2" [d24f9989-c47a-4568-81d6-b463704b2bb1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 23:28:07.014251   17442 system_pods.go:89] "etcd-ubuntu-20-agent-2" [6ca55116-9e5f-4b31-a5d1-22ca33473230] Running
	I0913 23:28:07.014256   17442 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [11e78520-bf53-4063-b809-908d3527afdf] Running
	I0913 23:28:07.014261   17442 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [adc882dd-b519-4eea-a197-4f596b408017] Running
	I0913 23:28:07.014265   17442 system_pods.go:89] "kube-proxy-ccmtg" [73fdfb4e-c926-4ccd-b35e-5df2208acfb3] Running
	I0913 23:28:07.014268   17442 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [56c0780d-d761-48fe-a15f-1b1113e80709] Running
	I0913 23:28:07.014274   17442 system_pods.go:89] "metrics-server-84c5f94fbc-trf62" [f9b8c6b4-aeff-4750-9bb3-7e3953c5258c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 23:28:07.014280   17442 system_pods.go:89] "nvidia-device-plugin-daemonset-v6s2b" [44534f93-107d-44e4-a5f2-7fd26e600251] Running
	I0913 23:28:07.014286   17442 system_pods.go:89] "registry-66c9cd494c-rctqp" [5e62917f-fbaf-47a4-ab23-4c40518c66e2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0913 23:28:07.014294   17442 system_pods.go:89] "registry-proxy-tcgsk" [2c97d680-312e-4178-b7a5-ec0b4dacb6a2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0913 23:28:07.014301   17442 system_pods.go:89] "snapshot-controller-56fcc65765-2ch4n" [7ff7b1c1-f704-4f2d-b60a-a94b57b31f28] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:07.014307   17442 system_pods.go:89] "snapshot-controller-56fcc65765-zkp5d" [a54db17e-a825-43cb-99e5-5344c179faa4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:07.014311   17442 system_pods.go:89] "storage-provisioner" [26a3ab50-0a6c-4ffd-a198-a6437b518845] Running
	I0913 23:28:07.014323   17442 system_pods.go:89] "tiller-deploy-b48cc5f79-4688p" [baecb9b8-e7ad-4c4a-a256-fb125074d61b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0913 23:28:07.014330   17442 system_pods.go:126] duration metric: took 205.001192ms to wait for k8s-apps to be running ...
	I0913 23:28:07.014342   17442 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 23:28:07.014392   17442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:28:07.026453   17442 system_svc.go:56] duration metric: took 12.10027ms WaitForService to wait for kubelet
	I0913 23:28:07.026484   17442 kubeadm.go:582] duration metric: took 9.94902379s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:28:07.026508   17442 node_conditions.go:102] verifying NodePressure condition ...
	I0913 23:28:07.114061   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:07.187042   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:07.209621   17442 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0913 23:28:07.209644   17442 node_conditions.go:123] node cpu capacity is 8
	I0913 23:28:07.209655   17442 node_conditions.go:105] duration metric: took 183.142206ms to run NodePressure ...
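(The NodePressure check reads conditions and capacity straight off the Node object. A hedged one-off equivalent of what was just verified:

	kubectl --context minikube get node ubuntu-20-agent-2 -o \
	  jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	# Capacity fields behind the "cpu capacity is 8" / ephemeral figures above:
	kubectl --context minikube get node ubuntu-20-agent-2 -o \
	  jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'
)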
	I0913 23:28:07.209669   17442 start.go:241] waiting for startup goroutines ...
	I0913 23:28:07.660930   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:07.686656   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:08.113918   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:08.186516   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:08.614408   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:08.687971   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:09.113705   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:09.187531   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:09.614251   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:09.687432   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:10.163034   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:10.187003   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:10.613039   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:10.687212   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:11.113325   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:11.187369   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:11.614044   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:11.687211   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.114681   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:12.188063   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:12.614060   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:12.687342   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:13.113893   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.187015   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:13.614623   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.714370   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:14.113532   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:14.186792   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:14.614484   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:14.713667   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:15.113470   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:15.186451   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:15.614690   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:15.687660   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:16.113732   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:16.186428   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:16.614450   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:16.687557   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:17.114196   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.186555   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:17.613522   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.687089   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:18.114931   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:18.187122   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:18.613753   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:18.686964   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:19.115012   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.187641   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:19.613471   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.687737   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:20.113715   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:20.186952   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:20.612958   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:20.686835   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:21.113655   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:21.186861   17442 kapi.go:107] duration metric: took 23.003383575s to wait for kubernetes.io/minikube-addons=registry ...
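(The kapi.go polling loop that just completed is functionally a label-selector readiness wait. The same check from the shell, assuming the selector logged above:

	kubectl --context minikube -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=6m

The csi-hostpath-driver and gcp-auth loops below follow the same pattern with their own selectors.)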
	I0913 23:28:21.613945   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:22.113866   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:22.613881   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:23.113883   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:23.613009   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:24.114085   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:24.614175   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:25.114159   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:25.669718   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:26.113432   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:26.613559   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.113299   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.614422   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:28.113230   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:28.614256   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:29.113804   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:29.615859   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:30.113486   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:30.613705   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:31.112928   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:31.614348   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:32.113810   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:32.613817   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:33.164616   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:33.614121   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:34.113629   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:34.613633   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:35.114205   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:35.614026   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:36.114365   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:36.613961   17442 kapi.go:107] duration metric: took 36.004765397s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 23:28:46.443782   17442 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 23:28:46.443805   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:46.943966   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:47.444366   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:47.944472   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:48.443683   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:48.943622   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:49.443611   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:49.943632   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:50.443857   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:50.943448   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:51.443431   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:51.943124   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:52.444424   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:52.944439   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:53.444624   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:53.944516   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:54.443056   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:54.944192   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:55.443987   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:55.943466   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:56.443747   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:56.943491   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:57.443402   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:57.943845   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:58.443980   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:58.943378   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:59.443272   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:28:59.944211   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:00.443585   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:00.943540   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:01.443218   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:01.944328   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:02.445059   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:02.943476   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:03.443803   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:03.943718   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:04.443449   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:04.943279   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:05.443089   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:05.943855   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:06.443994   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:06.943369   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:07.443174   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:07.943654   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:08.444871   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:08.945060   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:09.443998   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:09.943913   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:10.443414   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:10.943980   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:11.443800   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:11.944034   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:12.443952   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:12.943514   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:13.443554   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:13.943267   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:14.444447   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:14.943233   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:15.444177   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:15.944365   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:16.444555   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:16.943186   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:17.443040   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:17.943526   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:18.444689   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:18.943399   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:19.444695   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:19.943372   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:20.444267   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:20.944044   17442 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:21.443962   17442 kapi.go:107] duration metric: took 1m16.503481915s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 23:29:21.445571   17442 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0913 23:29:21.446718   17442 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 23:29:21.447751   17442 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 23:29:21.449027   17442 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, helm-tiller, yakd, storage-provisioner, metrics-server, storage-provisioner-rancher, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0913 23:29:21.450111   17442 addons.go:510] duration metric: took 1m24.377211463s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass helm-tiller yakd storage-provisioner metrics-server storage-provisioner-rancher inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0913 23:29:21.450148   17442 start.go:246] waiting for cluster config update ...
	I0913 23:29:21.450165   17442 start.go:255] writing updated cluster config ...
	I0913 23:29:21.450599   17442 exec_runner.go:51] Run: rm -f paused
	I0913 23:29:21.493240   17442 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 23:29:21.494882   17442 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
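
A minimal sketch of the opt-out mentioned in the gcp-auth hints above, assuming the webhook keys off the presence of the `gcp-auth-skip-secret` label on the pod (the pod name, image, and label value here are illustrative, not taken from this run):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds                 # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"     # presence of this label key opts the pod out of credential mounting
	spec:
	  containers:
	  - name: app
	    image: busybox                   # illustrative image
	    command: ["sleep", "3600"]

For pods created before the addon finished starting, the log above suggests re-running the enable with a refresh, e.g. `minikube addons enable gcp-auth --refresh` (flag taken from the addon's own hint; the exact binary path may differ in this test environment).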
	
	
	==> Docker <==
	-- Logs begin at Wed 2024-07-31 19:05:20 UTC, end at Fri 2024-09-13 23:39:12 UTC. --
	Sep 13 23:31:34 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:31:34.992217221Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 13 23:31:34 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:31:34.992241347Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 13 23:31:34 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:31:34.993734122Z" level=error msg="Error running exec e533f7a03195558123262731ef7964f0e29165461c449942a36827d1f30482cd in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 13 23:31:35 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:31:35.130622345Z" level=info msg="ignoring event" container=711372ea7579112482324d82bbc767aad2fef2ae5a068500dc91ba42e38c88e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:33:04 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:33:04.491123873Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 13 23:33:04 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:33:04.493465230Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 13 23:34:22 ubuntu-20-agent-2 cri-dockerd[17987]: time="2024-09-13T23:34:22Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 13 23:34:23 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:34:23.935673884Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 13 23:34:23 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:34:23.935669980Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 13 23:34:23 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:34:23.935911439Z" level=error msg="Error running exec 40ac74a1491c7b0b370cb6bcf953fe11636562f6624d8cd9aa345d65fd83f458 in container: cannot exec in a stopped state: unknown"
	Sep 13 23:34:23 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:34:23.943122077Z" level=info msg="ignoring event" container=21d89e8c1a06a530e62ca2bdcd0d4cf1e12769ab17ae82358d960c8eec1fd249 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:35:45 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:35:45.484945738Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 13 23:35:45 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:35:45.487200517Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed"
	Sep 13 23:38:11 ubuntu-20-agent-2 cri-dockerd[17987]: time="2024-09-13T23:38:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b69e8a65e61ebca38d7f37818d06e359c55521b015cc4bf57250667a2368ba55/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 13 23:38:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:38:11.980178400Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 23:38:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:38:11.982337509Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 23:38:24 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:38:24.484186873Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 23:38:24 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:38:24.486265903Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 23:38:51 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:38:51.481120082Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 23:38:51 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:38:51.483461942Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 23:39:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:39:11.446426487Z" level=info msg="ignoring event" container=b69e8a65e61ebca38d7f37818d06e359c55521b015cc4bf57250667a2368ba55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:39:11.701274513Z" level=info msg="ignoring event" container=2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:39:11.764950687Z" level=info msg="ignoring event" container=e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:39:11.836799615Z" level=info msg="ignoring event" container=2a2488f1d912d046c9764b6f02a056816a6a02c93ec27e35b4420835e0bdd2a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:11 ubuntu-20-agent-2 dockerd[17658]: time="2024-09-13T23:39:11.933189817Z" level=info msg="ignoring event" container=57b1ddfb7fe0df7119f5cbe758f89cde76790390c0558fb54b6c6cf09c9ea5ba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	21d89e8c1a06a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   bad6eca043379       gadget-sdn5b
	827f05ddb756c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   6771525fc52a5       gcp-auth-89d5ffd79-bzwl8
	3e9bc9c04de01       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   4a209c3f132b8       csi-hostpathplugin-qh7d2
	542a7fb821151       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   4a209c3f132b8       csi-hostpathplugin-qh7d2
	704bc394e1da0       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   4a209c3f132b8       csi-hostpathplugin-qh7d2
	eea8d89601a01       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   4a209c3f132b8       csi-hostpathplugin-qh7d2
	5981dcf86164d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   4a209c3f132b8       csi-hostpathplugin-qh7d2
	7b0dacdba10d7       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   42eed5f9ce08f       csi-hostpath-attacher-0
	11b667c711180       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   4a209c3f132b8       csi-hostpathplugin-qh7d2
	76fc9ebfc91bc       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   4cf106afd0e45       csi-hostpath-resizer-0
	36193ea709ac9       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   3d5674af1b935       snapshot-controller-56fcc65765-zkp5d
	05cb335070790       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   1e728a024c656       snapshot-controller-56fcc65765-2ch4n
	c567be6a1b8a1       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   577d9f72f4259       local-path-provisioner-86d989889c-8cpks
	300fe28326e07       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        10 minutes ago      Running             metrics-server                           0                   c127241428eee       metrics-server-84c5f94fbc-trf62
	1d02daefef76a       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  10 minutes ago      Running             tiller                                   0                   0fcb9e41ae791       tiller-deploy-b48cc5f79-4688p
	532282de56302       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   ce4593d26e19e       yakd-dashboard-67d98fc6b-sw9q9
	af0d99f86d4a8       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   01a3142769982       cloud-spanner-emulator-769b77f747-74qpc
	4b6d0c6b881eb       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   ae82fe92e0a42       nvidia-device-plugin-daemonset-v6s2b
	abb216077af8e       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   f4d912900c480       storage-provisioner
	15e0a78ec98b2       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   662fe3b2cddb9       coredns-7c65d6cfc9-khsrk
	019949511d0d9       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   f7531a1d02e58       kube-proxy-ccmtg
	d114732958c9f       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   15a502d6c5692       kube-scheduler-ubuntu-20-agent-2
	107eb8f53a410       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   519ac31b50293       etcd-ubuntu-20-agent-2
	3250f8e27aaa1       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   e0deb68e6886c       kube-apiserver-ubuntu-20-agent-2
	60a40e57ebb9b       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   ee6fb593cdabf       kube-controller-manager-ubuntu-20-agent-2
	
	
	==> coredns [15e0a78ec98b] <==
	[INFO] 10.244.0.9:44110 - 50929 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000098658s
	[INFO] 10.244.0.9:53793 - 2613 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047901s
	[INFO] 10.244.0.9:53793 - 44599 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066843s
	[INFO] 10.244.0.9:43965 - 12697 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000071953s
	[INFO] 10.244.0.9:43965 - 5014 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000187309s
	[INFO] 10.244.0.9:51362 - 26271 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000065164s
	[INFO] 10.244.0.9:51362 - 32668 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000108995s
	[INFO] 10.244.0.9:46391 - 59099 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000061516s
	[INFO] 10.244.0.9:46391 - 20902 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000093824s
	[INFO] 10.244.0.9:42069 - 35515 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000051725s
	[INFO] 10.244.0.9:42069 - 20925 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010539s
	[INFO] 10.244.0.23:59229 - 7472 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000254658s
	[INFO] 10.244.0.23:47377 - 42980 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000190858s
	[INFO] 10.244.0.23:57305 - 3750 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000133876s
	[INFO] 10.244.0.23:58058 - 63279 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000179995s
	[INFO] 10.244.0.23:40465 - 16597 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135813s
	[INFO] 10.244.0.23:56245 - 26711 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000157066s
	[INFO] 10.244.0.23:59491 - 60528 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003573625s
	[INFO] 10.244.0.23:41293 - 11840 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003614149s
	[INFO] 10.244.0.23:46282 - 55483 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.001955201s
	[INFO] 10.244.0.23:60194 - 63676 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003349848s
	[INFO] 10.244.0.23:58776 - 38207 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.001689342s
	[INFO] 10.244.0.23:59639 - 5780 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003140238s
	[INFO] 10.244.0.23:41359 - 59953 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002283234s
	[INFO] 10.244.0.23:37101 - 21190 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002453658s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T23_27_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:27:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 23:39:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:35:02 +0000   Fri, 13 Sep 2024 23:27:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:35:02 +0000   Fri, 13 Sep 2024 23:27:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:35:02 +0000   Fri, 13 Sep 2024 23:27:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:35:02 +0000   Fri, 13 Sep 2024 23:27:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859308Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859308Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    5f8c688f-34c7-4408-80b6-0648374c9e56
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-769b77f747-74qpc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-sdn5b                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-bzwl8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-khsrk                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-qh7d2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-ccmtg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-trf62              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-v6s2b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-2ch4n         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-zkp5d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 tiller-deploy-b48cc5f79-4688p                0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-8cpks      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-sw9q9               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 be 3f 53 15 e6 08 06
	[  +1.104115] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7a f2 cd 60 e7 38 08 06
	[  +0.033569] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f2 0d 87 06 e9 fd 08 06
	[  +2.547839] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8a 33 ff 72 38 65 08 06
	[  +1.885595] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 55 f6 8a e7 2f 08 06
	[  +1.825224] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 96 7b 0d 77 9b 66 08 06
	[  +4.615348] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 a1 5f ac 9a 1a 08 06
	[  +0.053690] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 4b fa bc 85 b0 08 06
	[  +0.226597] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea df 1e e6 98 4d 08 06
	[Sep13 23:29] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 0e af 98 60 99 d6 08 06
	[  +0.027122] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 02 73 62 ab ea 08 06
	[ +11.178978] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea ff 80 b0 6e 3c 08 06
	[  +0.000406] IPv4: martian source 10.244.0.23 from 10.244.0.4, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 06 69 96 0e 7e 4f 08 06
	
	
	==> etcd [107eb8f53a41] <==
	{"level":"info","ts":"2024-09-13T23:27:49.032846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
	{"level":"info","ts":"2024-09-13T23:27:49.032857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
	{"level":"info","ts":"2024-09-13T23:27:49.032863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-13T23:27:49.032872Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-13T23:27:49.032881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-13T23:27:49.033825Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T23:27:49.033826Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T23:27:49.033825Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T23:27:49.033830Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:27:49.034120Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T23:27:49.034143Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T23:27:49.034519Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:27:49.034756Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:27:49.034910Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:27:49.035084Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:27:49.035107Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:27:49.035922Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-13T23:27:49.036273Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-13T23:28:25.565715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.099599ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-09-13T23:28:25.565770Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.677327ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-89d5ffd79.17f4f17efe0a4382\" ","response":"range_response_count:1 size:927"}
	{"level":"info","ts":"2024-09-13T23:28:25.565794Z","caller":"traceutil/trace.go:171","msg":"trace[1806587783] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:994; }","duration":"123.195982ms","start":"2024-09-13T23:28:25.442586Z","end":"2024-09-13T23:28:25.565782Z","steps":["trace[1806587783] 'range keys from in-memory index tree'  (duration: 123.048641ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:28:25.565816Z","caller":"traceutil/trace.go:171","msg":"trace[1872658424] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-89d5ffd79.17f4f17efe0a4382; range_end:; response_count:1; response_revision:994; }","duration":"116.736972ms","start":"2024-09-13T23:28:25.449067Z","end":"2024-09-13T23:28:25.565804Z","steps":["trace[1872658424] 'range keys from in-memory index tree'  (duration: 116.542816ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-13T23:37:49.051382Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1712}
	{"level":"info","ts":"2024-09-13T23:37:49.075410Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1712,"took":"23.483501ms","hash":3552401535,"current-db-size-bytes":8245248,"current-db-size":"8.2 MB","current-db-size-in-use-bytes":4403200,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-09-13T23:37:49.075453Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3552401535,"revision":1712,"compact-revision":-1}
	
	
	==> gcp-auth [827f05ddb756] <==
	2024/09/13 23:29:20 GCP Auth Webhook started!
	2024/09/13 23:29:36 Ready to marshal response ...
	2024/09/13 23:29:36 Ready to write response ...
	2024/09/13 23:29:37 Ready to marshal response ...
	2024/09/13 23:29:37 Ready to write response ...
	2024/09/13 23:29:58 Ready to marshal response ...
	2024/09/13 23:29:58 Ready to write response ...
	2024/09/13 23:29:59 Ready to marshal response ...
	2024/09/13 23:29:59 Ready to write response ...
	2024/09/13 23:29:59 Ready to marshal response ...
	2024/09/13 23:29:59 Ready to write response ...
	2024/09/13 23:38:11 Ready to marshal response ...
	2024/09/13 23:38:11 Ready to write response ...
	
	
	==> kernel <==
	 23:39:12 up 21 min,  0 users,  load average: 0.05, 0.30, 0.39
	Linux ubuntu-20-agent-2 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [3250f8e27aaa] <==
	W0913 23:28:40.203029       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.53.178:443: connect: connection refused
	W0913 23:28:45.931799       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.9.58:443: connect: connection refused
	E0913 23:28:45.931834       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.9.58:443: connect: connection refused" logger="UnhandledError"
	W0913 23:29:07.959667       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.9.58:443: connect: connection refused
	E0913 23:29:07.959700       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.9.58:443: connect: connection refused" logger="UnhandledError"
	W0913 23:29:07.968621       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.9.58:443: connect: connection refused
	E0913 23:29:07.968662       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.9.58:443: connect: connection refused" logger="UnhandledError"
	I0913 23:29:36.753711       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0913 23:29:36.770826       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0913 23:29:49.162201       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0913 23:29:49.171815       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0913 23:29:49.313660       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0913 23:29:49.315842       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0913 23:29:49.316348       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0913 23:29:49.367560       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0913 23:29:49.411053       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0913 23:29:49.443496       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0913 23:29:49.500516       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0913 23:29:50.188462       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0913 23:29:50.358264       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0913 23:29:50.368220       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0913 23:29:50.484790       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0913 23:29:50.500768       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0913 23:29:50.526533       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0913 23:29:50.681799       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [60a40e57ebb9] <==
	W0913 23:37:45.422041       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:37:45.422087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:37:56.053950       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:37:56.053992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:02.083985       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:02.084025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:16.131898       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:16.131944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:20.386879       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:20.386917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:22.557216       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:22.557259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:27.736637       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:27.736677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:44.636972       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:44.637024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:50.376995       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:50.377046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:53.487453       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:53.487491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:38:56.027972       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:38:56.028014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:39:03.361730       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:03.361772       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:39:11.668075       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.251µs"
	
	
	==> kube-proxy [019949511d0d] <==
	I0913 23:27:59.081320       1 server_linux.go:66] "Using iptables proxy"
	I0913 23:27:59.237625       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0913 23:27:59.237701       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 23:27:59.305437       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0913 23:27:59.305500       1 server_linux.go:169] "Using iptables Proxier"
	I0913 23:27:59.309511       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 23:27:59.310208       1 server.go:483] "Version info" version="v1.31.1"
	I0913 23:27:59.310825       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:27:59.312365       1 config.go:199] "Starting service config controller"
	I0913 23:27:59.312391       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 23:27:59.312424       1 config.go:105] "Starting endpoint slice config controller"
	I0913 23:27:59.312450       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 23:27:59.313285       1 config.go:328] "Starting node config controller"
	I0913 23:27:59.313319       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 23:27:59.413518       1 shared_informer.go:320] Caches are synced for node config
	I0913 23:27:59.413549       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 23:27:59.413569       1 shared_informer.go:320] Caches are synced for service config
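
The "nodePortAddresses is unset" warning above points at a kube-proxy configuration knob; a minimal sketch of the suggested setting, assuming the v1alpha1 KubeProxyConfiguration schema is the config-file equivalent of the `--nodeport-addresses primary` flag the warning itself recommends:

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# Accept NodePort connections only on the node's primary address(es),
	# instead of on every local IP, per the warning in the log above.
	nodePortAddresses:
	- primary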
	
	
	==> kube-scheduler [d114732958c9] <==
	W0913 23:27:49.904981       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0913 23:27:49.904996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0913 23:27:49.905008       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0913 23:27:49.905010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:49.905033       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0913 23:27:49.905054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:49.905109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:49.905135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:50.739043       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0913 23:27:50.739090       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0913 23:27:50.801523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0913 23:27:50.801579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:50.813862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0913 23:27:50.813895       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:50.828130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:50.828159       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:50.960629       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0913 23:27:50.960677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:50.962487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0913 23:27:50.962526       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:51.024964       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0913 23:27:51.025009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0913 23:27:51.086388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0913 23:27:51.086428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0913 23:27:53.103279       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Wed 2024-07-31 19:05:20 UTC, end at Fri 2024-09-13 23:39:12 UTC. --
	Sep 13 23:39:04 ubuntu-20-agent-2 kubelet[18873]: E0913 23:39:04.340268   18873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="0aafacff-c327-44f0-8b19-d7c6a9a05a27"
	Sep 13 23:39:05 ubuntu-20-agent-2 kubelet[18873]: E0913 23:39:05.340024   18873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="fc311dc8-dddc-4bc6-b618-190017fbb792"
	Sep 13 23:39:11 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:11.637325   18873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fc311dc8-dddc-4bc6-b618-190017fbb792-gcp-creds\") pod \"fc311dc8-dddc-4bc6-b618-190017fbb792\" (UID: \"fc311dc8-dddc-4bc6-b618-190017fbb792\") "
	Sep 13 23:39:11 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:11.637386   18873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lvm9\" (UniqueName: \"kubernetes.io/projected/fc311dc8-dddc-4bc6-b618-190017fbb792-kube-api-access-9lvm9\") pod \"fc311dc8-dddc-4bc6-b618-190017fbb792\" (UID: \"fc311dc8-dddc-4bc6-b618-190017fbb792\") "
	Sep 13 23:39:11 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:11.637434   18873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc311dc8-dddc-4bc6-b618-190017fbb792-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "fc311dc8-dddc-4bc6-b618-190017fbb792" (UID: "fc311dc8-dddc-4bc6-b618-190017fbb792"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 13 23:39:11 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:11.639415   18873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc311dc8-dddc-4bc6-b618-190017fbb792-kube-api-access-9lvm9" (OuterVolumeSpecName: "kube-api-access-9lvm9") pod "fc311dc8-dddc-4bc6-b618-190017fbb792" (UID: "fc311dc8-dddc-4bc6-b618-190017fbb792"). InnerVolumeSpecName "kube-api-access-9lvm9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:39:11 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:11.737986   18873 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9lvm9\" (UniqueName: \"kubernetes.io/projected/fc311dc8-dddc-4bc6-b618-190017fbb792-kube-api-access-9lvm9\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 13 23:39:11 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:11.738031   18873 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fc311dc8-dddc-4bc6-b618-190017fbb792-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.040155   18873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh2k9\" (UniqueName: \"kubernetes.io/projected/5e62917f-fbaf-47a4-ab23-4c40518c66e2-kube-api-access-gh2k9\") pod \"5e62917f-fbaf-47a4-ab23-4c40518c66e2\" (UID: \"5e62917f-fbaf-47a4-ab23-4c40518c66e2\") "
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.042169   18873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e62917f-fbaf-47a4-ab23-4c40518c66e2-kube-api-access-gh2k9" (OuterVolumeSpecName: "kube-api-access-gh2k9") pod "5e62917f-fbaf-47a4-ab23-4c40518c66e2" (UID: "5e62917f-fbaf-47a4-ab23-4c40518c66e2"). InnerVolumeSpecName "kube-api-access-gh2k9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.141307   18873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4r6v\" (UniqueName: \"kubernetes.io/projected/2c97d680-312e-4178-b7a5-ec0b4dacb6a2-kube-api-access-r4r6v\") pod \"2c97d680-312e-4178-b7a5-ec0b4dacb6a2\" (UID: \"2c97d680-312e-4178-b7a5-ec0b4dacb6a2\") "
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.141424   18873 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gh2k9\" (UniqueName: \"kubernetes.io/projected/5e62917f-fbaf-47a4-ab23-4c40518c66e2-kube-api-access-gh2k9\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.143360   18873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c97d680-312e-4178-b7a5-ec0b4dacb6a2-kube-api-access-r4r6v" (OuterVolumeSpecName: "kube-api-access-r4r6v") pod "2c97d680-312e-4178-b7a5-ec0b4dacb6a2" (UID: "2c97d680-312e-4178-b7a5-ec0b4dacb6a2"). InnerVolumeSpecName "kube-api-access-r4r6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.222819   18873 scope.go:117] "RemoveContainer" containerID="e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95"
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.241777   18873 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r4r6v\" (UniqueName: \"kubernetes.io/projected/2c97d680-312e-4178-b7a5-ec0b4dacb6a2-kube-api-access-r4r6v\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.244714   18873 scope.go:117] "RemoveContainer" containerID="e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95"
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: E0913 23:39:12.246219   18873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95" containerID="e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95"
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.246261   18873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95"} err="failed to get container status \"e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95\": rpc error: code = Unknown desc = Error response from daemon: No such container: e2f0033587dd17f17a3606d4b326bad05ed6c66a374573a061d4cc3523188d95"
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.246286   18873 scope.go:117] "RemoveContainer" containerID="2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6"
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.263157   18873 scope.go:117] "RemoveContainer" containerID="2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6"
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: E0913 23:39:12.264140   18873 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6" containerID="2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6"
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.264183   18873 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6"} err="failed to get container status \"2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6\": rpc error: code = Unknown desc = Error response from daemon: No such container: 2ebbcb900f3700d0ca2292dc2b89433360ca28c60a19508c61c8946d94d822a6"
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.349916   18873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c97d680-312e-4178-b7a5-ec0b4dacb6a2" path="/var/lib/kubelet/pods/2c97d680-312e-4178-b7a5-ec0b4dacb6a2/volumes"
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.350313   18873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e62917f-fbaf-47a4-ab23-4c40518c66e2" path="/var/lib/kubelet/pods/5e62917f-fbaf-47a4-ab23-4c40518c66e2/volumes"
	Sep 13 23:39:12 ubuntu-20-agent-2 kubelet[18873]: I0913 23:39:12.350651   18873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc311dc8-dddc-4bc6-b618-190017fbb792" path="/var/lib/kubelet/pods/fc311dc8-dddc-4bc6-b618-190017fbb792/volumes"
	
	
	==> storage-provisioner [abb216077af8] <==
	I0913 23:27:59.660286       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 23:27:59.700394       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 23:27:59.700437       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 23:27:59.734316       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 23:27:59.734536       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e62359e4-2a85-41ce-86b1-424cab9a588f!
	I0913 23:27:59.735816       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e28357d7-c01f-4c16-bd58-06cf827c1b74", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_e62359e4-2a85-41ce-86b1-424cab9a588f became leader
	I0913 23:27:59.835949       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_e62359e4-2a85-41ce-86b1-424cab9a588f!
	

-- /stdout --
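Note on the kube-scheduler block in the logs above: the reflector.go "forbidden" warnings are a startup race, not the registry failure. The scheduler's informers begin listing resources before its RBAC bindings have been reconciled, and the noise stops at the final "Caches are synced" line. A hedged way to confirm the permissions did arrive is to impersonate the scheduler user with kubectl auth can-i (a sketch, not part of the test suite):

	# sketch: verify the scheduler can now list what it was denied at startup
	kubectl --context minikube auth can-i list persistentvolumes --as=system:kube-scheduler
	kubectl --context minikube auth can-i list csistoragecapacities.storage.k8s.io --as=system:kube-scheduler

Both should print "yes" on a healthy cluster.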
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Fri, 13 Sep 2024 23:29:59 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j76hf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j76hf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m13s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m39s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m39s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m39s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m24s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m4s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
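The Events table above carries the root cause: all four pulls of gcr.io/k8s-minikube/busybox:1.28.4-glibc were rejected with "unauthorized: authentication failed", so busybox (and the registry-test pod, which uses the same image) never left ImagePullBackOff. Since this job runs the none driver, the pull can be retried directly on the host to separate a registry-side outage from a cluster problem (a sketch, assuming docker is the container runtime, as elsewhere in this log):

	# sketch: retry the failing pull outside kubelet
	docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# an "unauthorized" failure here points at gcr.io or stale credentials
	# in ~/.docker/config.json, not at the minikube cluster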
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.79s)
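Because the probe image never pulled, the wget inside registry-test never executed, so the registry addon itself went unexercised; the one-minute timeout is a symptom of the image pull, not of the registry service. A hedged sketch for re-running the probe by hand, substituting a Docker Hub busybox for the gcr.io mirror image:

	# sketch: manual registry probe with a pullable image
	kubectl --context minikube run registry-probe --rm -it --restart=Never \
	  --image=busybox:1.28 -- \
	  wget --spider -S http://registry.kube-system.svc.cluster.local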

Test pass (111/168)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 2.06
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 1.02
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.05
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.54
22 TestOffline 71.45
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 101.5
29 TestAddons/serial/Volcano 37.35
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.47
36 TestAddons/parallel/MetricsServer 5.39
37 TestAddons/parallel/HelmTiller 9.16
39 TestAddons/parallel/CSI 45.13
40 TestAddons/parallel/Headlamp 14.87
41 TestAddons/parallel/CloudSpanner 5.25
43 TestAddons/parallel/NvidiaDevicePlugin 6.22
44 TestAddons/parallel/Yakd 10.39
45 TestAddons/StoppedEnableDisable 10.75
47 TestCertExpiration 228.2
58 TestFunctional/serial/CopySyncFile 0
59 TestFunctional/serial/StartWithProxy 30.47
60 TestFunctional/serial/AuditLog 0
61 TestFunctional/serial/SoftStart 27.01
62 TestFunctional/serial/KubeContext 0.04
63 TestFunctional/serial/KubectlGetPods 0.07
65 TestFunctional/serial/MinikubeKubectlCmd 0.11
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
67 TestFunctional/serial/ExtraConfig 34.83
68 TestFunctional/serial/ComponentHealth 0.06
69 TestFunctional/serial/LogsCmd 0.78
70 TestFunctional/serial/LogsFileCmd 0.82
71 TestFunctional/serial/InvalidService 3.97
73 TestFunctional/parallel/ConfigCmd 0.25
74 TestFunctional/parallel/DashboardCmd 7.21
75 TestFunctional/parallel/DryRun 0.15
76 TestFunctional/parallel/InternationalLanguage 0.08
77 TestFunctional/parallel/StatusCmd 0.41
80 TestFunctional/parallel/ProfileCmd/profile_not_create 0.23
81 TestFunctional/parallel/ProfileCmd/profile_list 0.21
82 TestFunctional/parallel/ProfileCmd/profile_json_output 0.21
84 TestFunctional/parallel/ServiceCmd/DeployApp 10.14
85 TestFunctional/parallel/ServiceCmd/List 0.33
86 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
87 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
88 TestFunctional/parallel/ServiceCmd/Format 0.15
89 TestFunctional/parallel/ServiceCmd/URL 0.14
90 TestFunctional/parallel/ServiceCmdConnect 7.29
91 TestFunctional/parallel/AddonsCmd 0.11
92 TestFunctional/parallel/PersistentVolumeClaim 21.24
95 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.26
96 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.18
99 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
100 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
104 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
107 TestFunctional/parallel/MySQL 21.35
111 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
112 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 12.71
113 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.33
116 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/Version/short 0.04
121 TestFunctional/parallel/Version/components 0.38
122 TestFunctional/parallel/License 0.21
123 TestFunctional/delete_echo-server_images 0.03
124 TestFunctional/delete_my-image_image 0.02
125 TestFunctional/delete_minikube_cached_images 0.02
130 TestImageBuild/serial/Setup 14.9
131 TestImageBuild/serial/NormalBuild 1.44
132 TestImageBuild/serial/BuildWithBuildArg 0.84
133 TestImageBuild/serial/BuildWithDockerIgnore 0.6
134 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.51
138 TestJSONOutput/start/Command 30.26
139 TestJSONOutput/start/Audit 0
141 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
142 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
144 TestJSONOutput/pause/Command 0.51
145 TestJSONOutput/pause/Audit 0
147 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
148 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
150 TestJSONOutput/unpause/Command 0.4
151 TestJSONOutput/unpause/Audit 0
153 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/stop/Command 5.3
157 TestJSONOutput/stop/Audit 0
159 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
161 TestErrorJSONOutput 0.19
166 TestMainNoArgs 0.04
167 TestMinikubeProfile 34.22
175 TestPause/serial/Start 29.34
176 TestPause/serial/SecondStartNoReconfiguration 29.82
177 TestPause/serial/Pause 0.51
178 TestPause/serial/VerifyStatus 0.13
179 TestPause/serial/Unpause 0.41
180 TestPause/serial/PauseAgain 0.54
181 TestPause/serial/DeletePaused 1.59
182 TestPause/serial/VerifyDeletedResources 0.06
196 TestRunningBinaryUpgrade 70
198 TestStoppedBinaryUpgrade/Setup 0.48
199 TestStoppedBinaryUpgrade/Upgrade 49.31
200 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
201 TestKubernetesUpgrade 317.34
TestDownloadOnly/v1.20.0/json-events (2.06s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (2.056877208s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (2.06s)
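With -o=json, minikube writes one CloudEvents-style JSON object per line instead of human-readable output; that stream is what this test parses. To inspect the same stream by hand, a small jq filter works (a sketch; io.k8s.sigs.minikube.step is assumed to be the event type minikube emits for progress steps):

	# sketch: print just the step names from the JSON event stream
	out/minikube-linux-amd64 start -o=json --download-only -p minikube --force \
	  --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm |
	  jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'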

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (54.198243ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:26:24
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:26:24.082972   13684 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:26:24.083074   13684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:24.083086   13684 out.go:358] Setting ErrFile to fd 2...
	I0913 23:26:24.083091   13684 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:24.083246   13684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5268/.minikube/bin
	W0913 23:26:24.083370   13684 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19640-5268/.minikube/config/config.json: open /home/jenkins/minikube-integration/19640-5268/.minikube/config/config.json: no such file or directory
	I0913 23:26:24.083928   13684 out.go:352] Setting JSON to true
	I0913 23:26:24.084818   13684 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":536,"bootTime":1726269448,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:26:24.084905   13684 start.go:139] virtualization: kvm guest
	I0913 23:26:24.087192   13684 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0913 23:26:24.087323   13684 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19640-5268/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 23:26:24.087337   13684 notify.go:220] Checking for updates...
	I0913 23:26:24.088491   13684 out.go:169] MINIKUBE_LOCATION=19640
	I0913 23:26:24.089630   13684 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:26:24.090874   13684 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19640-5268/kubeconfig
	I0913 23:26:24.092209   13684 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5268/.minikube
	I0913 23:26:24.093387   13684 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (1.02s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.022814894s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (1.02s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (54.243773ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:26:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:26:26.418914   13837 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:26:26.419153   13837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:26.419162   13837 out.go:358] Setting ErrFile to fd 2...
	I0913 23:26:26.419169   13837 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:26.419341   13837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5268/.minikube/bin
	I0913 23:26:26.419897   13837 out.go:352] Setting JSON to true
	I0913 23:26:26.420720   13837 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":538,"bootTime":1726269448,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:26:26.420807   13837 start.go:139] virtualization: kvm guest
	I0913 23:26:26.422669   13837 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0913 23:26:26.422765   13837 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19640-5268/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 23:26:26.422813   13837 notify.go:220] Checking for updates...
	I0913 23:26:26.423990   13837 out.go:169] MINIKUBE_LOCATION=19640
	I0913 23:26:26.425344   13837 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:26:26.426581   13837 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19640-5268/kubeconfig
	I0913 23:26:26.427912   13837 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5268/.minikube
	I0913 23:26:26.429166   13837 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:43107 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.54s)
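TestBinaryMirror points the kubectl/kubelet/kubeadm downloads at a local HTTP server (127.0.0.1:43107 here) instead of the default release host. A minimal sketch of standing up such a mirror yourself, assuming the directory layout mirrors the upstream release paths (vX.Y.Z/bin/linux/amd64/...):

	# sketch: serve pre-downloaded binaries as a local mirror
	mkdir -p mirror/v1.31.1/bin/linux/amd64
	cp kubectl kubeadm kubelet mirror/v1.31.1/bin/linux/amd64/
	python3 -m http.server 43107 --directory mirror &
	out/minikube-linux-amd64 start --download-only -p minikube \
	  --binary-mirror http://127.0.0.1:43107 --driver=none --bootstrapper=kubeadm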

TestOffline (71.45s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (1m9.901676822s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.549932579s)
--- PASS: TestOffline (71.45s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (43.716ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (44.787163ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

TestAddons/Setup (101.5s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm --addons=helm-tiller: (1m41.498587927s)
--- PASS: TestAddons/Setup (101.50s)
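The flag list above enables every addon under test in a single start. The same set can be toggled on a running cluster with the addons subcommand, which is the mirror image of the disable calls that appear throughout this report:

	out/minikube-linux-amd64 -p minikube addons list
	out/minikube-linux-amd64 -p minikube addons enable registry
	out/minikube-linux-amd64 -p minikube addons disable registry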

TestAddons/serial/Volcano (37.35s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 9.393838ms
addons_test.go:905: volcano-admission stabilized in 9.431558ms
addons_test.go:913: volcano-controller stabilized in 9.532466ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-7hlf6" [c84a685a-7e65-4586-a6e6-16cf06594c4c] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003496383s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-zv4h6" [eb139bbb-1406-414d-81ad-77b313f9a94e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00374885s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-76r5t" [e2ffdc55-b409-419b-b7c4-04f4f736e50c] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003595516s
addons_test.go:932: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ca3ee961-f0a8-41d6-b8b6-19ffa6f5d0ec] Pending
helpers_test.go:344: "test-job-nginx-0" [ca3ee961-f0a8-41d6-b8b6-19ffa6f5d0ec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ca3ee961-f0a8-41d6-b8b6-19ffa6f5d0ec] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.003250944s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.025658966s)
--- PASS: TestAddons/serial/Volcano (37.35s)
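The manual equivalents of the waits above, for anyone replaying this test by hand (namespaces and label selectors taken from the helper output):

	kubectl --context minikube get pods -n volcano-system -l app=volcano-scheduler
	kubectl --context minikube get vcjob -n my-volcano
	kubectl --context minikube get pods -n my-volcano -l volcano.sh/job-name=test-job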

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.47s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-sdn5b" [c3638dfb-58d8-4c19-aa40-457785bb4d88] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004796234s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.459854505s)
--- PASS: TestAddons/parallel/InspektorGadget (10.47s)

TestAddons/parallel/MetricsServer (5.39s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.968222ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-trf62" [f9b8c6b4-aeff-4750-9bb3-7e3953c5258c] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003435753s
addons_test.go:417: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.39s)

TestAddons/parallel/HelmTiller (9.16s)

=== RUN   TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.836201ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-4688p" [baecb9b8-e7ad-4c4a-a256-fb125074d61b] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003241661s
addons_test.go:475: (dbg) Run:  kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context minikube run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.877310076s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.16s)

TestAddons/parallel/CSI (45.13s)

=== RUN   TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.422938ms
addons_test.go:570: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [13d47e44-90bd-44fa-b632-fd36e0432f76] Pending
helpers_test.go:344: "task-pv-pod" [13d47e44-90bd-44fa-b632-fd36e0432f76] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [13d47e44-90bd-44fa-b632-fd36e0432f76] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004789646s
addons_test.go:590: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4ed1013f-59ff-4daa-a629-a924dfc95ef6] Pending
helpers_test.go:344: "task-pv-pod-restore" [4ed1013f-59ff-4daa-a629-a924dfc95ef6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4ed1013f-59ff-4daa-a629-a924dfc95ef6] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003688599s
addons_test.go:632: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.287123181s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (45.13s)
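The long runs of identical "get pvc ... jsonpath={.status.phase}" calls above are helpers_test.go polling until the claim leaves Pending. The same wait can be written with kubectl's own waiter (a sketch; jsonpath conditions require a reasonably recent kubectl):

	kubectl --context minikube wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc-restore --timeout=6m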

TestAddons/parallel/Headlamp (14.87s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-h5thm" [6b4d78a4-d279-45eb-b600-0c0c54a60499] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-h5thm" [6b4d78a4-d279-45eb-b600-0c0c54a60499] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003929942s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.389646381s)
--- PASS: TestAddons/parallel/Headlamp (14.87s)

TestAddons/parallel/CloudSpanner (5.25s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-74qpc" [65e423c5-742b-4eb3-a0cc-ba52ff2a62bd] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003903341s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.25s)

TestAddons/parallel/NvidiaDevicePlugin (6.22s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-v6s2b" [44534f93-107d-44e4-a5f2-7fd26e600251] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003926746s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.22s)
=== RUN   TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-sw9q9" [96511809-85f6-46e4-8a6c-b065c8844121] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003822149s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.382315221s)
--- PASS: TestAddons/parallel/Yakd (10.39s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.464037213s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.75s)
=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (13.651204854s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.882442076s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.667211807s)
--- PASS: TestCertExpiration (228.20s)
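
For context on what this test exercises: the first start issues client certificates that expire in three minutes, and the second start (--cert-expiration=8760h) verifies that minikube regenerates the by-then-expired certs. A minimal Go sketch of inspecting a certificate's remaining lifetime, assuming a PEM-encoded cert at a minikube-style path (the path is an assumption, not taken from the test):

// certcheck.go - a minimal sketch (not minikube's code) that reports how long
// until a PEM-encoded certificate expires. The path below is an assumption.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // assumed location
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cert %q expires in %s\n", cert.Subject.CommonName,
		time.Until(cert.NotAfter).Round(time.Second))
}
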
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19640-5268/.minikube/files/etc/test/nested/copy/13672/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (30.473353524s)
--- PASS: TestFunctional/serial/StartWithProxy (30.47s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (27.005760281s)
functional_test.go:663: soft start took 27.006465401s for "minikube" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.01s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.830507869s)
functional_test.go:761: restart took 34.830657173s for "minikube" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.83s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
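
The phase/status lines above come from decoding the kubectl JSON output. A minimal sketch, not minikube's actual helper, that reproduces the same check by reading each control-plane pod's phase and Ready condition:

// componenthealth.go - a minimal sketch (assumed, not the test's code) that
// runs the same kubectl query as above and prints phase plus Ready status
// for each control-plane pod.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "minikube", "get", "po",
		"-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "NotReady"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = "Ready"
			}
		}
		fmt.Printf("%s phase: %s status: %s\n",
			p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}
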
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.78s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd1074150991/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.82s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (150.493323ms)
-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:32028 |
	|-----------|-------------|-------------|--------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.97s)
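
The exit status 115 / SVC_UNREACHABLE result is the point of this test: invalidsvc.yaml defines a Service whose selector matches no running pod. A hedged sketch of one way to detect that condition (minikube's own check may differ), by asking for the Service's ready endpoint addresses:

// endpointcheck.go - a minimal sketch (assumed approach, not minikube's
// implementation) of the condition behind SVC_UNREACHABLE: a Service
// with no ready endpoints.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type endpoints struct {
	Subsets []struct {
		Addresses []struct {
			IP string `json:"ip"`
		} `json:"addresses"`
	} `json:"subsets"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "minikube",
		"get", "endpoints", "invalid-svc", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var ep endpoints
	if err := json.Unmarshal(out, &ep); err != nil {
		log.Fatal(err)
	}
	ready := 0
	for _, s := range ep.Subsets {
		ready += len(s.Addresses)
	}
	if ready == 0 {
		fmt.Println("no running pod backs this service (SVC_UNREACHABLE)")
		return
	}
	fmt.Printf("%d ready endpoint(s)\n", ready)
}
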
=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (40.09149ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (39.622128ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
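
Both exit status 14 results above are expected: `config get` on a key that was just unset fails with a dedicated exit code rather than printing an empty value. A toy sketch of those semantics, with an in-memory map standing in for minikube's real config store (the store layout here is invented for illustration):

// configget.go - a toy sketch (assumed semantics, not minikube's code) of a
// config store where getting a missing key exits with a distinct code.
package main

import (
	"fmt"
	"os"
)

const exitMissingKey = 14 // mirrors the exit status 14 seen above

func main() {
	cfg := map[string]string{} // "cpus" was just unset
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: configget KEY")
		os.Exit(2)
	}
	val, ok := cfg[os.Args[1]]
	if !ok {
		fmt.Fprintln(os.Stderr, "Error: specified key could not be found in config")
		os.Exit(exitMissingKey)
	}
	fmt.Println(val)
}
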
=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/13 23:46:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 48670: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.21s)
=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (77.082054ms)
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5268/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5268/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0913 23:46:45.754499   49053 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:46:45.754614   49053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:46:45.754625   49053 out.go:358] Setting ErrFile to fd 2...
	I0913 23:46:45.754631   49053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:46:45.754815   49053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5268/.minikube/bin
	I0913 23:46:45.755363   49053 out.go:352] Setting JSON to false
	I0913 23:46:45.756416   49053 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1758,"bootTime":1726269448,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:46:45.756513   49053 start.go:139] virtualization: kvm guest
	I0913 23:46:45.758715   49053 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0913 23:46:45.760039   49053 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19640-5268/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 23:46:45.760079   49053 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:46:45.760084   49053 notify.go:220] Checking for updates...
	I0913 23:46:45.761502   49053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:46:45.762810   49053 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5268/kubeconfig
	I0913 23:46:45.764052   49053 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5268/.minikube
	I0913 23:46:45.765276   49053 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:46:45.766371   49053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:46:45.767911   49053 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:46:45.768204   49053 exec_runner.go:51] Run: systemctl --version
	I0913 23:46:45.770922   49053 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:46:45.782732   49053 out.go:177] * Using the none driver based on existing profile
	I0913 23:46:45.784108   49053 start.go:297] selected driver: none
	I0913 23:46:45.784125   49053 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:46:45.784272   49053 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:46:45.784303   49053 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0913 23:46:45.784771   49053 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0913 23:46:45.786745   49053 out.go:201] 
	W0913 23:46:45.787941   49053 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 23:46:45.789074   49053 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.15s)
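
The dry run fails as intended because the requested 250MiB falls below the 1800MB floor quoted in the error. A minimal sketch of that validation, with the constant and exit code copied from the output above (the parsing helper is an assumption, not minikube's parser):

// memcheck.go - a minimal sketch (assumed constants and parsing) of the
// validation that rejects --memory 250MB.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

const minUsableMB = 1800 // the floor reported in the error above

// parseMB handles only the "NNNMB" form used in this test.
func parseMB(s string) (int, error) {
	s = strings.TrimSuffix(strings.ToUpper(s), "MB")
	return strconv.Atoi(s)
}

func main() {
	req, err := parseMB("250MB")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if req < minUsableMB {
		fmt.Fprintf(os.Stderr,
			"X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB\n",
			req, minUsableMB)
		os.Exit(23) // the exit status observed above
	}
}
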
=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (77.44202ms)
-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5268/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5268/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0913 23:46:45.908032   49100 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:46:45.908146   49100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:46:45.908155   49100 out.go:358] Setting ErrFile to fd 2...
	I0913 23:46:45.908160   49100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:46:45.908433   49100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-5268/.minikube/bin
	I0913 23:46:45.908943   49100 out.go:352] Setting JSON to false
	I0913 23:46:45.909911   49100 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1758,"bootTime":1726269448,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0913 23:46:45.910021   49100 start.go:139] virtualization: kvm guest
	I0913 23:46:45.912089   49100 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0913 23:46:45.913704   49100 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19640-5268/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 23:46:45.913731   49100 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:46:45.913747   49100 notify.go:220] Checking for updates...
	I0913 23:46:45.916430   49100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:46:45.917675   49100 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-5268/kubeconfig
	I0913 23:46:45.918852   49100 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5268/.minikube
	I0913 23:46:45.920074   49100 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0913 23:46:45.921326   49100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:46:45.922911   49100 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:46:45.923228   49100 exec_runner.go:51] Run: systemctl --version
	I0913 23:46:45.925682   49100 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:46:45.936148   49100 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0913 23:46:45.937284   49100 start.go:297] selected driver: none
	I0913 23:46:45.937298   49100 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:46:45.937405   49100 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:46:45.937428   49100 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0913 23:46:45.937754   49100 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0913 23:46:45.939777   49100 out.go:201] 
	W0913 23:46:45.940876   49100 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0913 23:46:45.941902   49100 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)
=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.41s)
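
The -f flag renders the status struct through an ordinary Go template; note that "kublet" is spelled that way in the test command itself, and the output is unaffected because only the field name inside {{...}} matters. A small sketch rendering an assumed status struct through the same template string (the struct and sample values are illustrative, not minikube's types):

// statusfmt.go - a minimal sketch of rendering a status struct through the
// same kind of template passed to `minikube status -f` above.
package main

import (
	"log"
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		log.Fatal(err)
	}
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		log.Fatal(err)
	}
}
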
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "170.023275ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.475893ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.21s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "167.708404ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "43.813709ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.21s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-nnnf4" [30053ab3-4d44-43c9-98fe-a23bb4d6ccdf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-nnnf4" [30053ab3-4d44-43c9-98fe-a23bb4d6ccdf] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004073597s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.14s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "325.532887ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:30531
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:30531
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.14s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-8fwtd" [a966443a-d1f4-40f3-b0af-d8ec99165176] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-8fwtd" [a966443a-d1f4-40f3-b0af-d8ec99165176] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.00345502s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:31088
functional_test.go:1675: http://10.138.0.48:31088: success! body:

Hostname: hello-node-connect-67bdd5bbb4-8fwtd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:31088
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.29s)
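
The "success! body:" block above is the result of fetching the NodePort URL that `minikube service --url` printed. A minimal sketch of that final probe, reusing the URL from the log:

// svcprobe.go - a minimal sketch of the final step above: fetch the NodePort
// URL that `minikube service --url` printed and echo the response body.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://10.138.0.48:31088/") // URL from the log above
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("status %s\n%s", resp.Status, body)
}
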
=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1368aef1-16b1-48c6-a194-130518a321a1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004257144s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3f757b60-85d5-470a-a495-c532b3b80c90] Pending
helpers_test.go:344: "sp-pod" [3f757b60-85d5-470a-a495-c532b3b80c90] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3f757b60-85d5-470a-a495-c532b3b80c90] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003431105s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [508bf84a-8c0b-4f2b-b228-15cdf843fc44] Pending
helpers_test.go:344: "sp-pod" [508bf84a-8c0b-4f2b-b228-15cdf843fc44] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [508bf84a-8c0b-4f2b-b228-15cdf843fc44] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003209139s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.24s)
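
The sequence above is the persistence check: write a file into the claim-backed mount, delete the pod, recreate it from the same manifest, and confirm the file survived. A condensed sketch of those four steps (pod name, manifest path, and mount point taken from the log; the readiness wait is elided):

// pvcpersist.go - a condensed sketch (not the test's code) of the
// persistence check above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run shells out to kubectl against the minikube context and fails fast.
func run(args ...string) string {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "minikube"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits for the new pod to become Ready before this check.
	fmt.Print(run("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}
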
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 50775: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [9ea585e2-e277-4e6e-9c5c-9c3c26c26b60] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [9ea585e2-e277-4e6e-9c5c-9c3c26c26b60] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003009087s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.181.71 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-8hbpc" [680bdfde-29e3-4f10-af51-05fdf90ca7ee] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-8hbpc" [680bdfde-29e3-4f10-af51-05fdf90ca7ee] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.002900053s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-8hbpc -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-8hbpc -- mysql -ppassword -e "show databases;": exit status 1 (108.460947ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-8hbpc -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-8hbpc -- mysql -ppassword -e "show databases;": exit status 1 (113.414802ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-8hbpc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.35s)
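
The two ERROR 2002 attempts are expected noise: mysqld is still initializing its socket when the first queries arrive, so the test keeps retrying until `show databases;` succeeds. A sketch of that retry loop, with the pod name taken from the log and the timeout chosen arbitrarily:

// mysqlretry.go - a minimal sketch (not the test's code) of retrying the
// `show databases;` exec until mysqld finishes initializing. The two
// ERROR 2002 attempts above are this loop's early iterations.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "minikube", "exec", "mysql-6cdb49bbb-8hbpc", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	deadline := time.Now().Add(2 * time.Minute) // arbitrary budget
	for {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("gave up: %v\n%s", err, out)
		}
		time.Sleep(5 * time.Second) // socket not ready yet; retry
	}
}
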
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (12.70692049s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (12.71s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.328982843s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.33s)
=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
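
The --template flag above ranges over the first node's metadata.labels map and prints each key. A self-contained sketch of the same template construct against sample data (the label values here are illustrative, not from this run):

// labels.go - a minimal sketch of the go-template construct used above:
// ranging over a node's metadata.labels map and printing each key.
package main

import (
	"log"
	"os"
	"text/template"
)

func main() {
	node := map[string]any{
		"metadata": map[string]any{
			"labels": map[string]string{ // sample labels, not from the log
				"kubernetes.io/hostname": "ubuntu-20-agent-2",
				"kubernetes.io/os":       "linux",
			},
		},
	}
	const format = "'{{range $k, $v := .metadata.labels}}{{$k}} {{end}}'\n"
	tmpl, err := template.New("labels").Parse(format)
	if err != nil {
		log.Fatal(err)
	}
	if err := tmpl.Execute(os.Stdout, node); err != nil {
		log.Fatal(err)
	}
}
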
=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.38s)
=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.896706171s)
--- PASS: TestImageBuild/serial/Setup (14.90s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.439978627s)
--- PASS: TestImageBuild/serial/NormalBuild (1.44s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.84s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.60s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.51s)

                                                
                                    
TestJSONOutput/start/Command (30.26s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (30.255366419s)
--- PASS: TestJSONOutput/start/Command (30.26s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.51s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.51s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.4s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.40s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.3s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.296035079s)
--- PASS: TestJSONOutput/stop/Command (5.30s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.861033ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"213132d7-9cc1-4661-9ee1-2d8e6b1a23f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ca5bb60-0e52-47b9-a15a-aa2f5e479a25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"0a2cf6fb-6dcf-4fc8-aba6-fa3bf0030805","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f17c3ce8-c157-4db9-b5eb-50f7d6b39612","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19640-5268/kubeconfig"}}
	{"specversion":"1.0","id":"d2a2e566-711c-4be8-a138-51ee2173d49d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5268/.minikube"}}
	{"specversion":"1.0","id":"1a26b7b1-590d-4ede-9856-bc7c5d6f7527","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9ecf7ce6-2e54-48d4-860e-6bee29090796","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"49d2dd05-079b-439d-bc79-e32b1865dd1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.19s)
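
Each line of the stdout above is a CloudEvents envelope, one event per line. As a reference for consuming that stream, here is a minimal sketch in Go, assuming only the envelope fields visible in this stdout (a type plus a string-valued data map); it is not part of the test suite:

    // parse_events.go (illustrative sketch, not minikube source)
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event mirrors the envelope printed by --output=json above.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate any non-JSON line
            }
            // Error events (e.g. DRV_UNSUPPORTED_OS above) carry exitcode and message.
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }

Piping the stdout above through this program would print the single DRV_UNSUPPORTED_OS error with exit code 56.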

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (34.22s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.513995364s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.817951294s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.291224374s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.22s)
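
The profile assertions above parse `profile list -ojson`. Since this log does not show that JSON, the sketch below decodes it loosely instead of assuming a schema; all names here are illustrative only:

    // profiles.go (illustrative sketch, not minikube source)
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var doc map[string]json.RawMessage
        if err := json.Unmarshal(out, &doc); err != nil {
            panic(err)
        }
        for key := range doc {
            fmt.Println("top-level key:", key) // inspect before committing to field names
        }
    }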

                                                
                                    
TestPause/serial/Start (29.34s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (29.337893697s)
--- PASS: TestPause/serial/Start (29.34s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (29.82s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (29.815120113s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.82s)

                                                
                                    
TestPause/serial/Pause (0.51s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.51s)

                                                
                                    
TestPause/serial/VerifyStatus (0.13s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (125.26221ms)

                                                
                                                
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
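
The --layout=cluster payload above maps onto a small set of Go types; this sketch mirrors only the fields visible in that stdout and is not minikube's own definition:

    // status.go (illustrative sketch)
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type component struct {
        Name       string `json:"Name"`
        StatusCode int    `json:"StatusCode"`
        StatusName string `json:"StatusName"`
    }

    type node struct {
        Name       string               `json:"Name"`
        StatusCode int                  `json:"StatusCode"`
        StatusName string               `json:"StatusName"`
        Components map[string]component `json:"Components"`
    }

    type clusterStatus struct {
        Name          string               `json:"Name"`
        StatusCode    int                  `json:"StatusCode"`
        StatusName    string               `json:"StatusName"`
        BinaryVersion string               `json:"BinaryVersion"`
        Components    map[string]component `json:"Components"`
        Nodes         []node               `json:"Nodes"`
    }

    func main() {
        raw := []byte(`{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Nodes":[]}`)
        var st clusterStatus
        if err := json.Unmarshal(raw, &st); err != nil {
            panic(err)
        }
        fmt.Println(st.StatusName) // "Paused" (StatusCode 418, as in the output above)
    }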

                                                
                                    
TestPause/serial/Unpause (0.41s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.41s)

                                                
                                    
TestPause/serial/PauseAgain (0.54s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.54s)

                                                
                                    
TestPause/serial/DeletePaused (1.59s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.593176788s)
--- PASS: TestPause/serial/DeletePaused (1.59s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.06s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

                                                
                                    
TestRunningBinaryUpgrade (70s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.592395810 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.592395810 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (30.411708052s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (35.508054855s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.144286233s)
--- PASS: TestRunningBinaryUpgrade (70.00s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (49.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3893329007 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3893329007 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.659158843s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3893329007 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3893329007 -p minikube stop: (23.708961954s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (10.940491508s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (49.31s)
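
The upgrade above is three sequential invocations: start with the old release binary, stop it, then start with the freshly built binary. A sketch of the same sequence from Go, using the exact commands from this log (purely illustrative):

    // upgrade_flow.go (illustrative sketch)
    package main

    import (
        "log"
        "os/exec"
    )

    // run executes one CLI step and aborts the flow on failure.
    func run(bin string, args ...string) {
        if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
            log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
        }
    }

    func main() {
        old := "/tmp/minikube-v1.26.0.3893329007" // temp copy of the old release, per the log
        run(old, "start", "-p", "minikube", "--memory=2200", "--vm-driver=none", "--bootstrapper=kubeadm")
        run(old, "-p", "minikube", "stop")
        run("out/minikube-linux-amd64", "start", "-p", "minikube", "--memory=2200", "--alsologtostderr", "-v=1", "--driver=none", "--bootstrapper=kubeadm")
    }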

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

                                                
                                    
TestKubernetesUpgrade (317.34s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (31.397652856s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.316618985s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (72.073185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m16.679891018s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (61.849539ms)

                                                
                                                
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-5268/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-5268/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (17.511286969s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.239557551s)
--- PASS: TestKubernetesUpgrade (317.34s)
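
The exit-106 refusal above asserts a one-way version gate: a requested Kubernetes version may never sort below the cluster's current one. This standalone sketch reproduces that invariant (it is not minikube's implementation) using golang.org/x/mod/semver:

    // downgrade_check.go (illustrative sketch)
    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    // checkTransition rejects any move to an older version, as the test expects.
    func checkTransition(current, requested string) error {
        if semver.Compare(requested, current) < 0 {
            return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
        }
        return nil
    }

    func main() {
        fmt.Println(checkTransition("v1.31.1", "v1.20.0")) // refused, matching K8S_DOWNGRADE_UNSUPPORTED above
        fmt.Println(checkTransition("v1.20.0", "v1.31.1")) // <nil>: upgrades proceed
    }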

                                                
                                    

Test skip (56/168)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
38 TestAddons/parallel/Olm 0
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 0
48 TestDockerFlags 0
49 TestForceSystemdFlag 0
50 TestForceSystemdEnv 0
51 TestDockerEnvContainerd 0
52 TestKVMDriverInstallOrUpdate 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
55 TestErrorSpam 0
64 TestFunctional/serial/CacheCmd 0
78 TestFunctional/parallel/MountCmd 0
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
102 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
103 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
105 TestFunctional/parallel/SSHCmd 0
106 TestFunctional/parallel/CpCmd 0
108 TestFunctional/parallel/FileSync 0
109 TestFunctional/parallel/CertSync 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
117 TestFunctional/parallel/ImageCommands 0
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0
126 TestGvisorAddon 0
127 TestMultiControlPlane 0
135 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
162 TestKicCustomNetwork 0
163 TestKicExistingNetwork 0
164 TestKicCustomSubnet 0
165 TestKicStaticIP 0
168 TestMountStart 0
169 TestMultiNode 0
170 TestNetworkPlugins 0
171 TestNoKubernetes 0
172 TestChangeNoneUser 0
183 TestPreload 0
184 TestScheduledStopWindows 0
185 TestScheduledStopUnix 0
186 TestSkaffold 0
189 TestStartStop/group/old-k8s-version 0.12
190 TestStartStop/group/newest-cni 0.12
191 TestStartStop/group/default-k8s-diff-port 0.12
192 TestStartStop/group/no-preload 0.12
193 TestStartStop/group/disable-driver-mounts 0.12
194 TestStartStop/group/embed-certs 0.12
195 TestInsufficientStorage 0
202 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Ingress (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
addons_test.go:198: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/LocalPath (0s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:978: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

                                                
                                    
TestCertOptions (0s)

                                                
                                                
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestForceSystemdFlag (0s)

                                                
                                                
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

                                                
                                    
TestForceSystemdEnv (0s)

                                                
                                                
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestErrorSpam (0s)

                                                
                                                
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

                                                
                                    
TestFunctional/parallel/CpCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

                                                
                                    
TestFunctional/parallel/FileSync (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

                                                
                                    
TestFunctional/parallel/CertSync (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestMultiControlPlane (0s)

                                                
                                                
=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestMountStart (0s)

                                                
                                                
=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

                                                
                                    
TestMultiNode (0s)

                                                
                                                
=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

                                                
                                    
TestNetworkPlugins (0s)

                                                
                                                
=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

                                                
                                    
TestNoKubernetes (0s)

                                                
                                                
=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestPreload (0s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/old-k8s-version (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.12s)

                                                
                                    
TestStartStop/group/newest-cni (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.12s)

                                                
                                    
TestStartStop/group/no-preload (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.12s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)

                                                
                                    
TestStartStop/group/embed-certs (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.12s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    