Test Report: none_Linux 19672

d6d2a37830b251a8a712eec07ee86a534797346d:2024-09-20:36297

Test fail (1/167)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 72.82s   |
TestAddons/parallel/Registry (72.82s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.63479ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-8c7tp" [be7ec7f6-7cec-4f63-bab2-8844fbb26f79] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003939642s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9zk5q" [7bdaa858-4534-4dbd-b767-3de12e3d88ce] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003172952s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.088587714s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/20 16:56:19 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
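For reference, the two connectivity checks this test performs can be re-run by hand. This is a sketch reconstructed from the commands logged above; it assumes a live profile named "minikube" and the same busybox image the test uses.

```shell
# In-cluster check: spawn a throwaway pod and probe the registry Service
# by its cluster DNS name (this is the step that timed out above).
kubectl --context minikube run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

# Host-side check: the test then resolves the node IP and hits the
# registry endpoint on port 5000, matching the DEBUG GET line above.
NODE_IP=$(out/minikube-linux-amd64 -p minikube ip)
curl -sS "http://${NODE_IP}:5000/"
```

A passing run would show the in-cluster wget returning HTTP/1.1 200; here it exited non-zero after the 1m0s timeout, while the host-side GET to 10.138.0.48:5000 succeeded, pointing at in-cluster DNS or CNI connectivity rather than the registry pod itself.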
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:40853               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:44 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:44 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 20 Sep 24 16:44 UTC | 20 Sep 24 16:46 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 20 Sep 24 16:46 UTC | 20 Sep 24 16:47 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 20 Sep 24 16:56 UTC | 20 Sep 24 16:56 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:44:46
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:44:46.393717   19594 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:44:46.393941   19594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:44:46.393949   19594 out.go:358] Setting ErrFile to fd 2...
	I0920 16:44:46.393953   19594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:44:46.394129   19594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8660/.minikube/bin
	I0920 16:44:46.394678   19594 out.go:352] Setting JSON to false
	I0920 16:44:46.395479   19594 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1638,"bootTime":1726849048,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 16:44:46.395576   19594 start.go:139] virtualization: kvm guest
	I0920 16:44:46.397621   19594 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 16:44:46.398910   19594 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-8660/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 16:44:46.398952   19594 notify.go:220] Checking for updates...
	I0920 16:44:46.398954   19594 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 16:44:46.400353   19594 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:44:46.401699   19594 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8660/kubeconfig
	I0920 16:44:46.402894   19594 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8660/.minikube
	I0920 16:44:46.404229   19594 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 16:44:46.405433   19594 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 16:44:46.406640   19594 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:44:46.416317   19594 out.go:177] * Using the none driver based on user configuration
	I0920 16:44:46.417622   19594 start.go:297] selected driver: none
	I0920 16:44:46.417633   19594 start.go:901] validating driver "none" against <nil>
	I0920 16:44:46.417643   19594 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 16:44:46.417665   19594 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0920 16:44:46.417942   19594 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0920 16:44:46.418410   19594 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 16:44:46.418612   19594 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 16:44:46.418636   19594 cni.go:84] Creating CNI manager for ""
	I0920 16:44:46.418686   19594 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 16:44:46.418693   19594 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 16:44:46.418741   19594 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:44:46.420382   19594 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0920 16:44:46.421858   19594 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/config.json ...
	I0920 16:44:46.421885   19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/config.json: {Name:mkdf036dff907fb437264bef45587df8a3fa5ee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:46.422000   19594 start.go:360] acquireMachinesLock for minikube: {Name:mkdc49cc563151f6fcc0b1f78bca5c30c862e88d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 16:44:46.422031   19594 start.go:364] duration metric: took 18.715µs to acquireMachinesLock for "minikube"
	I0920 16:44:46.422047   19594 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 16:44:46.422126   19594 start.go:125] createHost starting for "" (driver="none")
	I0920 16:44:46.423597   19594 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0920 16:44:46.424767   19594 exec_runner.go:51] Run: systemctl --version
	I0920 16:44:46.427141   19594 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0920 16:44:46.427179   19594 client.go:168] LocalClient.Create starting
	I0920 16:44:46.427263   19594 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8660/.minikube/certs/ca.pem
	I0920 16:44:46.427310   19594 main.go:141] libmachine: Decoding PEM data...
	I0920 16:44:46.427327   19594 main.go:141] libmachine: Parsing certificate...
	I0920 16:44:46.427381   19594 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-8660/.minikube/certs/cert.pem
	I0920 16:44:46.427411   19594 main.go:141] libmachine: Decoding PEM data...
	I0920 16:44:46.427426   19594 main.go:141] libmachine: Parsing certificate...
	I0920 16:44:46.427751   19594 client.go:171] duration metric: took 560.863µs to LocalClient.Create
	I0920 16:44:46.427772   19594 start.go:167] duration metric: took 632.689µs to libmachine.API.Create "minikube"
	I0920 16:44:46.427778   19594 start.go:293] postStartSetup for "minikube" (driver="none")
	I0920 16:44:46.427827   19594 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 16:44:46.427862   19594 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 16:44:46.436479   19594 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 16:44:46.436498   19594 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 16:44:46.436506   19594 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 16:44:46.438279   19594 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0920 16:44:46.439554   19594 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8660/.minikube/addons for local assets ...
	I0920 16:44:46.439621   19594 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-8660/.minikube/files for local assets ...
	I0920 16:44:46.439647   19594 start.go:296] duration metric: took 11.862163ms for postStartSetup
	I0920 16:44:46.440229   19594 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/config.json ...
	I0920 16:44:46.440373   19594 start.go:128] duration metric: took 18.237035ms to createHost
	I0920 16:44:46.440390   19594 start.go:83] releasing machines lock for "minikube", held for 18.348412ms
	I0920 16:44:46.440844   19594 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 16:44:46.440938   19594 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0920 16:44:46.443952   19594 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 16:44:46.444001   19594 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 16:44:46.452933   19594 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 16:44:46.452952   19594 start.go:495] detecting cgroup driver to use...
	I0920 16:44:46.452977   19594 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 16:44:46.453070   19594 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 16:44:46.471937   19594 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 16:44:46.480822   19594 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 16:44:46.489284   19594 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 16:44:46.489332   19594 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 16:44:46.498803   19594 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 16:44:46.507255   19594 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 16:44:46.515367   19594 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 16:44:46.523540   19594 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 16:44:46.532138   19594 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 16:44:46.541201   19594 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 16:44:46.550322   19594 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 16:44:46.559318   19594 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 16:44:46.566350   19594 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 16:44:46.573296   19594 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 16:44:46.788249   19594 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0920 16:44:46.854163   19594 start.go:495] detecting cgroup driver to use...
	I0920 16:44:46.854283   19594 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 16:44:46.854408   19594 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 16:44:46.872665   19594 exec_runner.go:51] Run: which cri-dockerd
	I0920 16:44:46.873538   19594 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 16:44:46.881174   19594 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0920 16:44:46.881196   19594 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0920 16:44:46.881225   19594 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0920 16:44:46.888485   19594 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 16:44:46.888637   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1499762060 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0920 16:44:46.897036   19594 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0920 16:44:47.108294   19594 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0920 16:44:47.337306   19594 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 16:44:47.337454   19594 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0920 16:44:47.337468   19594 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0920 16:44:47.337513   19594 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0920 16:44:47.346330   19594 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0920 16:44:47.346453   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1209004492 /etc/docker/daemon.json
	I0920 16:44:47.354106   19594 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 16:44:47.570786   19594 exec_runner.go:51] Run: sudo systemctl restart docker
	I0920 16:44:47.868640   19594 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 16:44:47.879912   19594 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0920 16:44:47.896120   19594 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 16:44:47.908291   19594 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0920 16:44:48.118734   19594 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0920 16:44:48.337162   19594 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 16:44:48.561029   19594 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0920 16:44:48.574784   19594 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 16:44:48.585981   19594 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 16:44:48.783614   19594 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0920 16:44:48.849439   19594 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 16:44:48.849492   19594 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0920 16:44:48.850908   19594 start.go:563] Will wait 60s for crictl version
	I0920 16:44:48.850940   19594 exec_runner.go:51] Run: which crictl
	I0920 16:44:48.851766   19594 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0920 16:44:48.879685   19594 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0920 16:44:48.879738   19594 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0920 16:44:48.900589   19594 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0920 16:44:48.923554   19594 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0920 16:44:48.923627   19594 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0920 16:44:48.926355   19594 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0920 16:44:48.927659   19594 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 16:44:48.927757   19594 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 16:44:48.927766   19594 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0920 16:44:48.927844   19594 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0920 16:44:48.927885   19594 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0920 16:44:48.976977   19594 cni.go:84] Creating CNI manager for ""
	I0920 16:44:48.976999   19594 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 16:44:48.977009   19594 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 16:44:48.977029   19594 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 16:44:48.977150   19594 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 16:44:48.977202   19594 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 16:44:48.986493   19594 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 16:44:48.986539   19594 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 16:44:48.995061   19594 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 16:44:48.995115   19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 16:44:48.995061   19594 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 16:44:48.995186   19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 16:44:48.995059   19594 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 16:44:48.995350   19594 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0920 16:44:49.008481   19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0920 16:44:49.045054   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1821831228 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 16:44:49.047377   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube546789139 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 16:44:49.077918   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube30309066 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 16:44:49.142104   19594 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 16:44:49.150392   19594 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0920 16:44:49.150412   19594 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0920 16:44:49.150444   19594 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0920 16:44:49.158035   19594 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0920 16:44:49.158155   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3884025451 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0920 16:44:49.165647   19594 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0920 16:44:49.165668   19594 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0920 16:44:49.165700   19594 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0920 16:44:49.172961   19594 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 16:44:49.173089   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4235946222 /lib/systemd/system/kubelet.service
	I0920 16:44:49.180581   19594 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0920 16:44:49.180752   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4213183433 /var/tmp/minikube/kubeadm.yaml.new
	I0920 16:44:49.188521   19594 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0920 16:44:49.189861   19594 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 16:44:49.408391   19594 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0920 16:44:49.422235   19594 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube for IP: 10.138.0.48
	I0920 16:44:49.422253   19594 certs.go:194] generating shared ca certs ...
	I0920 16:44:49.422270   19594 certs.go:226] acquiring lock for ca certs: {Name:mk1d8899ce2a87028cac7a49ff26964e9bc72225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:49.422384   19594 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-8660/.minikube/ca.key
	I0920 16:44:49.422423   19594 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-8660/.minikube/proxy-client-ca.key
	I0920 16:44:49.422433   19594 certs.go:256] generating profile certs ...
	I0920 16:44:49.422481   19594 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/client.key
	I0920 16:44:49.422494   19594 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/client.crt with IP's: []
	I0920 16:44:49.875761   19594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/client.crt ...
	I0920 16:44:49.875789   19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/client.crt: {Name:mk7612666dff1775ca3525ead0c65436e5c520d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:49.875930   19594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/client.key ...
	I0920 16:44:49.875940   19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/client.key: {Name:mk449a238ae36687c25ac2321fcfdc974bee5fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:49.876004   19594 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0920 16:44:49.876019   19594 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0920 16:44:49.982701   19594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0920 16:44:49.982729   19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk45d79ab0143733d8a3776acb94f00bb45ef4af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:49.982849   19594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0920 16:44:49.982858   19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk353fb02d782506ee48ad3d8d88d8ea9ab1cfcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:49.982908   19594 certs.go:381] copying /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.crt
	I0920 16:44:49.982979   19594 certs.go:385] copying /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.key
	I0920 16:44:49.983030   19594 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.key
	I0920 16:44:49.983043   19594 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0920 16:44:50.115007   19594 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.crt ...
	I0920 16:44:50.115037   19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.crt: {Name:mkcda2f3effe768f01706934a20a050b37960bec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.115160   19594 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.key ...
	I0920 16:44:50.115169   19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.key: {Name:mk2ed70961549e0b26ca0fe6a6bc0e06bcde52c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:44:50.115306   19594 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8660/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 16:44:50.115336   19594 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8660/.minikube/certs/ca.pem (1078 bytes)
	I0920 16:44:50.115359   19594 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8660/.minikube/certs/cert.pem (1123 bytes)
	I0920 16:44:50.115380   19594 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-8660/.minikube/certs/key.pem (1679 bytes)
	I0920 16:44:50.115958   19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 16:44:50.116084   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3122914844 /var/lib/minikube/certs/ca.crt
	I0920 16:44:50.124372   19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 16:44:50.124473   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1461209255 /var/lib/minikube/certs/ca.key
	I0920 16:44:50.132113   19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 16:44:50.132206   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2552536999 /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 16:44:50.139860   19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 16:44:50.139953   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1928580747 /var/lib/minikube/certs/proxy-client-ca.key
	I0920 16:44:50.148101   19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0920 16:44:50.148210   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube11191036 /var/lib/minikube/certs/apiserver.crt
	I0920 16:44:50.157255   19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 16:44:50.157351   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3128518087 /var/lib/minikube/certs/apiserver.key
	I0920 16:44:50.164291   19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 16:44:50.164387   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2355093440 /var/lib/minikube/certs/proxy-client.crt
	I0920 16:44:50.171503   19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 16:44:50.171610   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube726170504 /var/lib/minikube/certs/proxy-client.key
	I0920 16:44:50.179676   19594 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0920 16:44:50.179691   19594 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:50.179717   19594 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:50.188474   19594 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-8660/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 16:44:50.188607   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube734945398 /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:50.196039   19594 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 16:44:50.196146   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000014679 /var/lib/minikube/kubeconfig
	I0920 16:44:50.203793   19594 exec_runner.go:51] Run: openssl version
	I0920 16:44:50.206540   19594 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 16:44:50.214494   19594 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:50.215714   19594 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 20 16:44 /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:50.215752   19594 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 16:44:50.218612   19594 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 16:44:50.226881   19594 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 16:44:50.227906   19594 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 16:44:50.227939   19594 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:44:50.228089   19594 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 16:44:50.243134   19594 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 16:44:50.251137   19594 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 16:44:50.258315   19594 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0920 16:44:50.279098   19594 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 16:44:50.288132   19594 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 16:44:50.288150   19594 kubeadm.go:157] found existing configuration files:
	
	I0920 16:44:50.288196   19594 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 16:44:50.296418   19594 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 16:44:50.296482   19594 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 16:44:50.303793   19594 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 16:44:50.311104   19594 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 16:44:50.311154   19594 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 16:44:50.318081   19594 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 16:44:50.325186   19594 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 16:44:50.325226   19594 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 16:44:50.332835   19594 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 16:44:50.340494   19594 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 16:44:50.340541   19594 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 16:44:50.347202   19594 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 16:44:50.377240   19594 kubeadm.go:310] W0920 16:44:50.377143   20482 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 16:44:50.377745   19594 kubeadm.go:310] W0920 16:44:50.377700   20482 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 16:44:50.379209   19594 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 16:44:50.379250   19594 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 16:44:50.466808   19594 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 16:44:50.466918   19594 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 16:44:50.466930   19594 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 16:44:50.466935   19594 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 16:44:50.476447   19594 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 16:44:50.479002   19594 out.go:235]   - Generating certificates and keys ...
	I0920 16:44:50.479047   19594 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 16:44:50.479083   19594 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 16:44:50.769741   19594 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 16:44:50.960520   19594 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 16:44:51.096069   19594 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 16:44:51.202338   19594 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 16:44:51.345509   19594 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 16:44:51.345590   19594 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0920 16:44:51.427893   19594 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 16:44:51.428064   19594 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0920 16:44:51.574477   19594 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 16:44:51.780289   19594 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 16:44:51.834279   19594 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 16:44:51.834403   19594 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 16:44:51.980919   19594 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 16:44:52.062315   19594 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 16:44:52.298212   19594 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 16:44:52.403734   19594 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 16:44:52.793690   19594 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 16:44:52.794265   19594 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 16:44:52.796467   19594 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 16:44:52.798456   19594 out.go:235]   - Booting up control plane ...
	I0920 16:44:52.798485   19594 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 16:44:52.798506   19594 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 16:44:52.798918   19594 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 16:44:52.820623   19594 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 16:44:52.824924   19594 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 16:44:52.824949   19594 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 16:44:53.034858   19594 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 16:44:53.034884   19594 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 16:44:53.536372   19594 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.523924ms
	I0920 16:44:53.536398   19594 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 16:44:57.538376   19594 kubeadm.go:310] [api-check] The API server is healthy after 4.001971734s
	I0920 16:44:57.549191   19594 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 16:44:57.558265   19594 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 16:44:57.573022   19594 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 16:44:57.573048   19594 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 16:44:57.579399   19594 kubeadm.go:310] [bootstrap-token] Using token: hugp6m.tyvuqbgbnvgnovg0
	I0920 16:44:57.580836   19594 out.go:235]   - Configuring RBAC rules ...
	I0920 16:44:57.580860   19594 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 16:44:57.583505   19594 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 16:44:57.588379   19594 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 16:44:57.590589   19594 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 16:44:57.592844   19594 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 16:44:57.595958   19594 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 16:44:57.944268   19594 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 16:44:58.364598   19594 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 16:44:58.943744   19594 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 16:44:58.944723   19594 kubeadm.go:310] 
	I0920 16:44:58.944748   19594 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 16:44:58.944753   19594 kubeadm.go:310] 
	I0920 16:44:58.944757   19594 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 16:44:58.944761   19594 kubeadm.go:310] 
	I0920 16:44:58.944764   19594 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 16:44:58.944768   19594 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 16:44:58.944771   19594 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 16:44:58.944775   19594 kubeadm.go:310] 
	I0920 16:44:58.944778   19594 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 16:44:58.944782   19594 kubeadm.go:310] 
	I0920 16:44:58.944785   19594 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 16:44:58.944789   19594 kubeadm.go:310] 
	I0920 16:44:58.944793   19594 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 16:44:58.944797   19594 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 16:44:58.944800   19594 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 16:44:58.944804   19594 kubeadm.go:310] 
	I0920 16:44:58.944807   19594 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 16:44:58.944811   19594 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 16:44:58.944814   19594 kubeadm.go:310] 
	I0920 16:44:58.944817   19594 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hugp6m.tyvuqbgbnvgnovg0 \
	I0920 16:44:58.944822   19594 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa5ac8eb105ac186d25174573bbce63b062ef4a25f52bd5bc8e84536a951a851 \
	I0920 16:44:58.944825   19594 kubeadm.go:310] 	--control-plane 
	I0920 16:44:58.944829   19594 kubeadm.go:310] 
	I0920 16:44:58.944833   19594 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 16:44:58.944837   19594 kubeadm.go:310] 
	I0920 16:44:58.944841   19594 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hugp6m.tyvuqbgbnvgnovg0 \
	I0920 16:44:58.944845   19594 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fa5ac8eb105ac186d25174573bbce63b062ef4a25f52bd5bc8e84536a951a851 
	I0920 16:44:58.947708   19594 cni.go:84] Creating CNI manager for ""
	I0920 16:44:58.947736   19594 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 16:44:58.949444   19594 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 16:44:58.950712   19594 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0920 16:44:58.961932   19594 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 16:44:58.962056   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube830594341 /etc/cni/net.d/1-k8s.conflist
	I0920 16:44:58.971138   19594 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 16:44:58.971219   19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:58.971249   19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_20T16_44_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0920 16:44:58.980373   19594 ops.go:34] apiserver oom_adj: -16
	I0920 16:44:59.045655   19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:44:59.545788   19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:00.046704   19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:00.545700   19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:01.046153   19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:01.546306   19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:02.046406   19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:02.546457   19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:03.046102   19594 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 16:45:03.106560   19594 kubeadm.go:1113] duration metric: took 4.135387262s to wait for elevateKubeSystemPrivileges
	I0920 16:45:03.106596   19594 kubeadm.go:394] duration metric: took 12.878657753s to StartCluster
	I0920 16:45:03.106619   19594 settings.go:142] acquiring lock: {Name:mk6ada6352ea5bdecb1c79df6ac47b0dadd41593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:45:03.106684   19594 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-8660/kubeconfig
	I0920 16:45:03.107232   19594 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-8660/kubeconfig: {Name:mk3d4a06a73fedada4259eb022305dcbcccbad51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 16:45:03.107438   19594 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 16:45:03.107511   19594 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 16:45:03.107636   19594 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0920 16:45:03.107648   19594 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0920 16:45:03.107656   19594 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0920 16:45:03.107666   19594 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0920 16:45:03.107672   19594 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0920 16:45:03.107686   19594 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0920 16:45:03.107693   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.107696   19594 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0920 16:45:03.107700   19594 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0920 16:45:03.107710   19594 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:45:03.107725   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.107728   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.107753   19594 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0920 16:45:03.107767   19594 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0920 16:45:03.107774   19594 addons.go:69] Setting registry=true in profile "minikube"
	I0920 16:45:03.107788   19594 addons.go:234] Setting addon registry=true in "minikube"
	I0920 16:45:03.107793   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.107812   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.108060   19594 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0920 16:45:03.108116   19594 mustload.go:65] Loading cluster: minikube
	I0920 16:45:03.107657   19594 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0920 16:45:03.108329   19594 addons.go:69] Setting volcano=true in profile "minikube"
	I0920 16:45:03.108340   19594 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0920 16:45:03.108345   19594 addons.go:234] Setting addon volcano=true in "minikube"
	I0920 16:45:03.108351   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.108361   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.108364   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.108363   19594 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:45:03.108378   19594 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0920 16:45:03.108378   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.108388   19594 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0920 16:45:03.108391   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.108391   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.108398   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.108399   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.108403   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.108408   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.108409   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.108421   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.108432   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.107672   19594 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0920 16:45:03.108472   19594 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0920 16:45:03.108496   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.108577   19594 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0920 16:45:03.108595   19594 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0920 16:45:03.107636   19594 addons.go:69] Setting yakd=true in profile "minikube"
	I0920 16:45:03.108871   19594 addons.go:234] Setting addon yakd=true in "minikube"
	I0920 16:45:03.108900   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.108980   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.108993   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.109021   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.109084   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.109091   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.109098   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.109104   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.109127   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.109133   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.109191   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.109201   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.109232   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.109373   19594 out.go:177] * Configuring local host environment ...
	I0920 16:45:03.108432   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.109587   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.108316   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.109671   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.108368   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.109733   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.109434   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.109752   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.109784   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.109960   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.109999   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 16:45:03.111148   19594 out.go:270] * 
	W0920 16:45:03.111167   19594 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0920 16:45:03.111175   19594 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0920 16:45:03.111182   19594 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0920 16:45:03.111187   19594 out.go:270] * 
	W0920 16:45:03.111224   19594 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0920 16:45:03.111230   19594 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0920 16:45:03.111236   19594 out.go:270] * 
	W0920 16:45:03.111260   19594 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0920 16:45:03.111268   19594 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0920 16:45:03.111273   19594 out.go:270] * 
	W0920 16:45:03.111279   19594 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0920 16:45:03.111304   19594 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 16:45:03.112660   19594 out.go:177] * Verifying Kubernetes components...
	I0920 16:45:03.114100   19594 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 16:45:03.128407   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.129533   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.129679   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.129737   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.137332   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.137361   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.137400   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.144198   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.137333   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.144496   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.144511   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.144548   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.145097   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.149474   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.149525   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.151612   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.151658   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.151961   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.152014   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.161595   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.163647   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.163711   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.166911   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.166929   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.166940   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.166978   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.167274   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.170530   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.170755   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.172231   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.172281   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.173098   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.173121   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.173432   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.175482   19594 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 16:45:03.176593   19594 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 16:45:03.176626   19594 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 16:45:03.176972   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2807691950 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 16:45:03.178353   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.178404   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.178677   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.179014   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.180592   19594 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 16:45:03.182132   19594 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 16:45:03.183456   19594 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 16:45:03.183486   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 16:45:03.183621   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube616937658 /etc/kubernetes/addons/registry-rc.yaml
	I0920 16:45:03.185935   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.185964   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.186406   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.186462   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.190576   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.190598   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.191231   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.191283   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.191395   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.193591   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.193619   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.198306   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.198424   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.198454   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.198475   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.198982   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.199459   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.200011   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.201546   19594 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 16:45:03.202562   19594 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 16:45:03.202605   19594 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 16:45:03.202648   19594 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 16:45:03.202676   19594 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 16:45:03.203621   19594 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 16:45:03.203644   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 16:45:03.203750   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3074728425 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 16:45:03.203851   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube423314052 /etc/kubernetes/addons/registry-svc.yaml
	I0920 16:45:03.203932   19594 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:45:03.203950   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 16:45:03.204262   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3919251867 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:45:03.204553   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.204572   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.204604   19594 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 16:45:03.204632   19594 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 16:45:03.204761   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3081934045 /etc/kubernetes/addons/ig-namespace.yaml
	I0920 16:45:03.204801   19594 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 16:45:03.204862   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 16:45:03.205051   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4291153230 /etc/kubernetes/addons/deployment.yaml
	I0920 16:45:03.205440   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.205456   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.205771   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.205811   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.206307   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.206329   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.206716   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.206756   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.209671   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.210850   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.210873   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.211905   19594 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 16:45:03.214682   19594 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:45:03.214708   19594 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0920 16:45:03.214716   19594 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:45:03.214753   19594 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:45:03.214918   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.215325   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.215737   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.216504   19594 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0920 16:45:03.216539   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.217121   19594 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 16:45:03.217134   19594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 16:45:03.217701   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.217722   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.217757   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.218192   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.218215   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.219955   19594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 16:45:03.220042   19594 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 16:45:03.221740   19594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 16:45:03.222565   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.222961   19594 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 16:45:03.223741   19594 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0920 16:45:03.223786   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.224612   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:03.224628   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:03.224673   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:03.224851   19594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 16:45:03.225641   19594 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 16:45:03.225675   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 16:45:03.226211   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1524924394 /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 16:45:03.227236   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.227257   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.232409   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.232431   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.234098   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.234116   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.236219   19594 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 16:45:03.236498   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.238243   19594 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 16:45:03.238288   19594 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 16:45:03.239492   19594 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 16:45:03.239514   19594 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 16:45:03.239617   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2886266894 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 16:45:03.241029   19594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 16:45:03.242249   19594 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 16:45:03.243931   19594 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 16:45:03.243960   19594 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 16:45:03.244084   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube915938413 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 16:45:03.244228   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.244251   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:03.244828   19594 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 16:45:03.244862   19594 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 16:45:03.244995   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2077044191 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 16:45:03.246319   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 16:45:03.246699   19594 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:45:03.246726   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 16:45:03.246996   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1933288019 /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:45:03.250884   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 16:45:03.250928   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.250969   19594 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 16:45:03.251039   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1074630350 /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:45:03.250989   19594 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 16:45:03.251124   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 16:45:03.251184   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2884370248 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 16:45:03.253856   19594 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 16:45:03.255651   19594 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 16:45:03.255690   19594 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 16:45:03.255992   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2248737722 /etc/kubernetes/addons/yakd-ns.yaml
	I0920 16:45:03.258614   19594 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 16:45:03.262153   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.265743   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 16:45:03.267470   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 16:45:03.274399   19594 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 16:45:03.274438   19594 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 16:45:03.274576   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1949191535 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 16:45:03.276039   19594 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 16:45:03.276067   19594 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 16:45:03.276180   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube229908915 /etc/kubernetes/addons/ig-role.yaml
	I0920 16:45:03.279548   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:03.286583   19594 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:45:03.286613   19594 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 16:45:03.286669   19594 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 16:45:03.286694   19594 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 16:45:03.286733   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube838362008 /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:45:03.286817   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube974948099 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 16:45:03.287161   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 16:45:03.291057   19594 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 16:45:03.291081   19594 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 16:45:03.291165   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1813447522 /etc/kubernetes/addons/yakd-sa.yaml
	I0920 16:45:03.292240   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.292314   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.294936   19594 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 16:45:03.294967   19594 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 16:45:03.295083   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4146359657 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 16:45:03.314422   19594 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 16:45:03.314457   19594 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 16:45:03.314601   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2980362868 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 16:45:03.316517   19594 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 16:45:03.316545   19594 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 16:45:03.316744   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3230597966 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 16:45:03.320041   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:03.320097   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:03.320263   19594 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 16:45:03.320281   19594 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 16:45:03.320383   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3702161271 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 16:45:03.325646   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.325673   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.327798   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 16:45:03.328339   19594 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 16:45:03.328367   19594 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 16:45:03.328478   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3876711540 /etc/kubernetes/addons/yakd-crb.yaml
	I0920 16:45:03.337135   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.337182   19594 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 16:45:03.337201   19594 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0920 16:45:03.337212   19594 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0920 16:45:03.337246   19594 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0920 16:45:03.345570   19594 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 16:45:03.345596   19594 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 16:45:03.345601   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:03.345617   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:03.345729   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3361776696 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 16:45:03.350410   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:03.352357   19594 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 16:45:03.353794   19594 out.go:177]   - Using image docker.io/busybox:stable
	I0920 16:45:03.355184   19594 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:45:03.355212   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 16:45:03.355322   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2291821422 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:45:03.356907   19594 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 16:45:03.356931   19594 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 16:45:03.357054   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1786773614 /etc/kubernetes/addons/yakd-svc.yaml
	I0920 16:45:03.360419   19594 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 16:45:03.360446   19594 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 16:45:03.360562   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2181556984 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 16:45:03.375181   19594 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 16:45:03.375219   19594 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 16:45:03.376145   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1693979917 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 16:45:03.389466   19594 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:45:03.389492   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 16:45:03.389625   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube496196015 /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:45:03.392762   19594 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 16:45:03.392895   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2180015718 /etc/kubernetes/addons/storageclass.yaml
	I0920 16:45:03.402530   19594 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 16:45:03.402569   19594 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 16:45:03.402685   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2896344940 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 16:45:03.404082   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 16:45:03.408876   19594 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 16:45:03.408904   19594 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 16:45:03.409017   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1983332709 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 16:45:03.422215   19594 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 16:45:03.422249   19594 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 16:45:03.422380   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3728481436 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 16:45:03.424108   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 16:45:03.426702   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 16:45:03.449765   19594 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 16:45:03.449814   19594 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 16:45:03.449956   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3366167828 /etc/kubernetes/addons/ig-crd.yaml
	I0920 16:45:03.456917   19594 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:03.456950   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 16:45:03.457068   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3167440687 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:03.499198   19594 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:45:03.499229   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 16:45:03.499353   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube23259019 /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:45:03.514607   19594 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 16:45:03.514652   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 16:45:03.514918   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3822892204 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 16:45:03.528636   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 16:45:03.530894   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:03.556312   19594 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 16:45:03.556353   19594 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 16:45:03.556491   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2871550575 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 16:45:03.558663   19594 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0920 16:45:03.576752   19594 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 16:45:03.576795   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 16:45:03.576946   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube284233598 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 16:45:03.588142   19594 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0920 16:45:03.593861   19594 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0920 16:45:03.593886   19594 node_ready.go:38] duration metric: took 5.712961ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0920 16:45:03.593898   19594 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 16:45:03.605499   19594 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 16:45:03.605533   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 16:45:03.605660   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1184704723 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 16:45:03.607548   19594 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:03.630484   19594 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:45:03.630523   19594 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 16:45:03.630658   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1796071118 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:45:03.684533   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 16:45:03.771271   19594 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0920 16:45:03.878624   19594 addons.go:475] Verifying addon registry=true in "minikube"
	I0920 16:45:03.880660   19594 out.go:177] * Verifying registry addon...
	I0920 16:45:03.895264   19594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 16:45:03.899834   19594 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 16:45:03.899854   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:04.190839   19594 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0920 16:45:04.281636   19594 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0920 16:45:04.377324   19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.090117959s)
	I0920 16:45:04.398333   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:04.652538   19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.123836851s)
	I0920 16:45:04.720823   19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.294055016s)
	I0920 16:45:04.722449   19594 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0920 16:45:04.900007   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:05.132507   19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.601534509s)
	W0920 16:45:05.132549   19594 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 16:45:05.132574   19594 retry.go:31] will retry after 228.109144ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 16:45:05.361287   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 16:45:05.399585   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:05.613663   19594 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:05.900461   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:06.084318   19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.399723367s)
	I0920 16:45:06.084356   19594 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0920 16:45:06.086527   19594 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 16:45:06.088544   19594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 16:45:06.110127   19594 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 16:45:06.110149   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:06.275138   19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.007630063s)
	I0920 16:45:06.399090   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:06.594312   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:06.900131   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:07.093811   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:07.398807   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:07.593193   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:07.899396   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:08.093687   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:08.113285   19594 pod_ready.go:103] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:08.167454   19594 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.806115806s)
	I0920 16:45:08.399265   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:08.593705   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:08.899868   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:09.093430   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:09.399923   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:09.594584   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:09.898987   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:10.094057   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:10.112185   19594 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:10.112205   19594 pod_ready.go:82] duration metric: took 6.504622461s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:10.112216   19594 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:10.268818   19594 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 16:45:10.268963   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3892026705 /var/lib/minikube/google_application_credentials.json
	I0920 16:45:10.278568   19594 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 16:45:10.278692   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2117278926 /var/lib/minikube/google_cloud_project
	I0920 16:45:10.287855   19594 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0920 16:45:10.287906   19594 host.go:66] Checking if "minikube" exists ...
	I0920 16:45:10.288447   19594 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 16:45:10.288466   19594 api_server.go:166] Checking apiserver status ...
	I0920 16:45:10.288498   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:10.304993   19594 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/20893/cgroup
	I0920 16:45:10.316059   19594 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa"
	I0920 16:45:10.316127   19594 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/8b1d9d632055c4c35bf6631f68002668288c5a0b67fa2ea0a28846ee1f7e67aa/freezer.state
	I0920 16:45:10.325407   19594 api_server.go:204] freezer state: "THAWED"
	I0920 16:45:10.325436   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:10.329684   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:10.329745   19594 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 16:45:10.393165   19594 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 16:45:10.399514   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:10.551098   19594 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 16:45:10.592758   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:10.614710   19594 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 16:45:10.614835   19594 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 16:45:10.615001   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2249469409 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 16:45:10.625062   19594 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 16:45:10.625091   19594 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 16:45:10.625196   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2737502880 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 16:45:10.635460   19594 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:10.635498   19594 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 16:45:10.635608   19594 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2512882713 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:10.643485   19594 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 16:45:10.899580   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:10.994446   19594 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0920 16:45:10.996314   19594 out.go:177] * Verifying gcp-auth addon...
	I0920 16:45:10.998484   19594 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 16:45:11.001204   19594 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 16:45:11.105022   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:11.399857   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:11.592766   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:11.899223   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:12.092219   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:12.117125   19594 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"False"
	I0920 16:45:12.399320   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:12.593212   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:12.899515   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:13.093335   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:13.398522   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:13.603631   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:13.618247   19594 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:13.618269   19594 pod_ready.go:82] duration metric: took 3.506046375s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.618281   19594 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.622810   19594 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:13.622831   19594 pod_ready.go:82] duration metric: took 4.542427ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.622843   19594 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4z8bv" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.627109   19594 pod_ready.go:93] pod "kube-proxy-4z8bv" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:13.627130   19594 pod_ready.go:82] duration metric: took 4.279162ms for pod "kube-proxy-4z8bv" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.627140   19594 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.631136   19594 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:13.631159   19594 pod_ready.go:82] duration metric: took 4.00971ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.631173   19594 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-c2k6b" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.635218   19594 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-c2k6b" in "kube-system" namespace has status "Ready":"True"
	I0920 16:45:13.635235   19594 pod_ready.go:82] duration metric: took 4.054562ms for pod "nvidia-device-plugin-daemonset-c2k6b" in "kube-system" namespace to be "Ready" ...
	I0920 16:45:13.635245   19594 pod_ready.go:39] duration metric: took 10.041333736s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 16:45:13.635266   19594 api_server.go:52] waiting for apiserver process to appear ...
	I0920 16:45:13.635319   19594 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:45:13.653910   19594 api_server.go:72] duration metric: took 10.542575198s to wait for apiserver process to appear ...
	I0920 16:45:13.653931   19594 api_server.go:88] waiting for apiserver healthz status ...
	I0920 16:45:13.653958   19594 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 16:45:13.657763   19594 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 16:45:13.658674   19594 api_server.go:141] control plane version: v1.31.1
	I0920 16:45:13.658697   19594 api_server.go:131] duration metric: took 4.759044ms to wait for apiserver health ...
	I0920 16:45:13.658706   19594 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 16:45:13.819986   19594 system_pods.go:59] 16 kube-system pods found
	I0920 16:45:13.820019   19594 system_pods.go:61] "coredns-7c65d6cfc9-48qs6" [376bb7d3-255c-4beb-9c27-b35d4bd98a27] Running
	I0920 16:45:13.820028   19594 system_pods.go:61] "csi-hostpath-attacher-0" [b1cb0be0-26d2-4a26-9781-f3c2fbc7f08d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 16:45:13.820035   19594 system_pods.go:61] "csi-hostpath-resizer-0" [8fc51a98-53ab-4075-87e9-50633ba372bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 16:45:13.820044   19594 system_pods.go:61] "csi-hostpathplugin-pgw8q" [70b5a0d8-d5f5-4712-93ca-dcb274c0f739] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 16:45:13.820049   19594 system_pods.go:61] "etcd-ubuntu-20-agent-2" [922d60d1-b58c-4147-b1dc-1a00f4eeeb25] Running
	I0920 16:45:13.820054   19594 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [64b9b3b3-ffdf-42e0-83f0-b13f88231b46] Running
	I0920 16:45:13.820060   19594 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [b7657e3a-50d7-4294-946b-e4813531fecf] Running
	I0920 16:45:13.820065   19594 system_pods.go:61] "kube-proxy-4z8bv" [4f24ae89-aadf-47b5-85f1-ea65df5a9426] Running
	I0920 16:45:13.820071   19594 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [2f92fe05-36e9-4d75-8026-fb6d0e248c33] Running
	I0920 16:45:13.820079   19594 system_pods.go:61] "metrics-server-84c5f94fbc-kmrlz" [6e334bf0-acd9-45f6-8232-8231952e001c] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 16:45:13.820085   19594 system_pods.go:61] "nvidia-device-plugin-daemonset-c2k6b" [14de1edd-c7d5-44d9-881f-cad9fc8dffde] Running
	I0920 16:45:13.820090   19594 system_pods.go:61] "registry-66c9cd494c-8c7tp" [be7ec7f6-7cec-4f63-bab2-8844fbb26f79] Running
	I0920 16:45:13.820096   19594 system_pods.go:61] "registry-proxy-9zk5q" [7bdaa858-4534-4dbd-b767-3de12e3d88ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 16:45:13.820102   19594 system_pods.go:61] "snapshot-controller-56fcc65765-jbh9v" [0a86461e-e296-4306-943e-9f440c47dce3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:13.820107   19594 system_pods.go:61] "snapshot-controller-56fcc65765-prdk4" [f9add925-1abf-48b7-86df-343057465374] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:13.820111   19594 system_pods.go:61] "storage-provisioner" [e1979350-1b14-4a06-9acb-a7845fc29294] Running
	I0920 16:45:13.820116   19594 system_pods.go:74] duration metric: took 161.403892ms to wait for pod list to return data ...
	I0920 16:45:13.820122   19594 default_sa.go:34] waiting for default service account to be created ...
	I0920 16:45:13.898565   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:14.015107   19594 default_sa.go:45] found service account: "default"
	I0920 16:45:14.015129   19594 default_sa.go:55] duration metric: took 195.001604ms for default service account to be created ...
	I0920 16:45:14.015136   19594 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 16:45:14.103666   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:14.220940   19594 system_pods.go:86] 16 kube-system pods found
	I0920 16:45:14.220967   19594 system_pods.go:89] "coredns-7c65d6cfc9-48qs6" [376bb7d3-255c-4beb-9c27-b35d4bd98a27] Running
	I0920 16:45:14.220979   19594 system_pods.go:89] "csi-hostpath-attacher-0" [b1cb0be0-26d2-4a26-9781-f3c2fbc7f08d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 16:45:14.220987   19594 system_pods.go:89] "csi-hostpath-resizer-0" [8fc51a98-53ab-4075-87e9-50633ba372bc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 16:45:14.220997   19594 system_pods.go:89] "csi-hostpathplugin-pgw8q" [70b5a0d8-d5f5-4712-93ca-dcb274c0f739] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 16:45:14.221004   19594 system_pods.go:89] "etcd-ubuntu-20-agent-2" [922d60d1-b58c-4147-b1dc-1a00f4eeeb25] Running
	I0920 16:45:14.221013   19594 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [64b9b3b3-ffdf-42e0-83f0-b13f88231b46] Running
	I0920 16:45:14.221024   19594 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [b7657e3a-50d7-4294-946b-e4813531fecf] Running
	I0920 16:45:14.221032   19594 system_pods.go:89] "kube-proxy-4z8bv" [4f24ae89-aadf-47b5-85f1-ea65df5a9426] Running
	I0920 16:45:14.221039   19594 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [2f92fe05-36e9-4d75-8026-fb6d0e248c33] Running
	I0920 16:45:14.221049   19594 system_pods.go:89] "metrics-server-84c5f94fbc-kmrlz" [6e334bf0-acd9-45f6-8232-8231952e001c] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 16:45:14.221056   19594 system_pods.go:89] "nvidia-device-plugin-daemonset-c2k6b" [14de1edd-c7d5-44d9-881f-cad9fc8dffde] Running
	I0920 16:45:14.221062   19594 system_pods.go:89] "registry-66c9cd494c-8c7tp" [be7ec7f6-7cec-4f63-bab2-8844fbb26f79] Running
	I0920 16:45:14.221071   19594 system_pods.go:89] "registry-proxy-9zk5q" [7bdaa858-4534-4dbd-b767-3de12e3d88ce] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 16:45:14.221080   19594 system_pods.go:89] "snapshot-controller-56fcc65765-jbh9v" [0a86461e-e296-4306-943e-9f440c47dce3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:14.221092   19594 system_pods.go:89] "snapshot-controller-56fcc65765-prdk4" [f9add925-1abf-48b7-86df-343057465374] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 16:45:14.221098   19594 system_pods.go:89] "storage-provisioner" [e1979350-1b14-4a06-9acb-a7845fc29294] Running
	I0920 16:45:14.221109   19594 system_pods.go:126] duration metric: took 205.964397ms to wait for k8s-apps to be running ...
	I0920 16:45:14.221121   19594 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 16:45:14.221171   19594 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0920 16:45:14.233099   19594 system_svc.go:56] duration metric: took 11.972413ms WaitForService to wait for kubelet
	I0920 16:45:14.233124   19594 kubeadm.go:582] duration metric: took 11.121796723s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 16:45:14.233140   19594 node_conditions.go:102] verifying NodePressure condition ...
	I0920 16:45:14.400042   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:14.416491   19594 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0920 16:45:14.416523   19594 node_conditions.go:123] node cpu capacity is 8
	I0920 16:45:14.416539   19594 node_conditions.go:105] duration metric: took 183.393858ms to run NodePressure ...
	I0920 16:45:14.416554   19594 start.go:241] waiting for startup goroutines ...
	I0920 16:45:14.592491   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:14.898174   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:15.092941   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:15.398244   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:15.592884   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:15.899876   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.092983   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:16.434295   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:16.593292   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:16.898512   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:17.092282   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:17.399049   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:17.593107   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:17.899307   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.104272   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:18.398574   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:18.592574   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:18.899672   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:19.093034   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:19.399635   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:19.592973   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:19.899725   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.093304   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:20.399493   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:20.592982   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:20.898978   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:21.099216   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:21.398618   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:21.593738   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:21.899279   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:22.093335   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:22.398513   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 16:45:22.593181   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:22.901022   19594 kapi.go:107] duration metric: took 19.005760315s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 16:45:23.093317   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:23.593663   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:24.093391   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:24.592831   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:25.093246   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:25.604406   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:26.104373   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:26.594032   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:27.092496   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:27.604108   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:28.092996   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:28.603995   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:29.103471   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:29.593335   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:30.093882   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:30.710036   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:31.104005   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:31.592995   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:32.092506   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:32.592596   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:33.094155   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:33.603970   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:34.093481   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:34.603518   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:35.092108   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:35.593166   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:36.093546   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:36.603368   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:37.093143   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:37.593074   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:38.093982   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:38.640081   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:39.093303   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:39.592894   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:40.093246   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:40.593408   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:41.093679   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:41.593607   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 16:45:42.104006   19594 kapi.go:107] duration metric: took 36.015460603s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 16:45:52.502298   19594 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 16:45:52.502327   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:53.002323   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:53.507422   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:54.002263   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:54.502434   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:55.001199   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:55.502330   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:56.001429   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:56.502080   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:57.001848   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:57.502220   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:58.002673   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:58.501553   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:59.001774   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:45:59.502109   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:00.002575   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:00.501622   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:01.001339   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:01.501569   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:02.001766   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:02.501872   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:03.001839   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:03.501954   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:04.002217   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:04.502435   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:05.001245   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:05.502421   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:06.001606   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:06.501754   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:07.001336   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:07.501544   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:08.001641   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:08.501747   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:09.001950   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:09.501866   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:10.002090   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:10.501937   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:11.001818   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:11.501819   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:12.002083   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:12.502026   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:13.001875   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:13.501900   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:14.002223   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:14.503063   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:15.002382   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:15.502049   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:16.001443   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:16.501424   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:17.001422   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:17.501838   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:18.001623   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:18.502954   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:19.002086   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:19.502397   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:20.001454   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:20.501455   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:21.002118   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:21.502014   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:22.002311   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:22.502666   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:23.001307   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:23.502977   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:24.001877   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:24.502024   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:25.002268   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:25.502280   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:26.003065   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:26.501958   19594 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 16:46:27.002052   19594 kapi.go:107] duration metric: took 1m16.00355175s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 16:46:27.003563   19594 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0920 16:46:27.004960   19594 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 16:46:27.006230   19594 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 16:46:27.007646   19594 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, metrics-server, storage-provisioner-rancher, storage-provisioner, inspektor-gadget, yakd, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0920 16:46:27.009203   19594 addons.go:510] duration metric: took 1m23.901694197s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner metrics-server storage-provisioner-rancher storage-provisioner inspektor-gadget yakd volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0920 16:46:27.009244   19594 start.go:246] waiting for cluster config update ...
	I0920 16:46:27.009258   19594 start.go:255] writing updated cluster config ...
	I0920 16:46:27.009493   19594 exec_runner.go:51] Run: rm -f paused
	I0920 16:46:27.052475   19594 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 16:46:27.054553   19594 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Wed 2024-08-07 18:08:31 UTC, end at Fri 2024-09-20 16:56:19 UTC. --
	Sep 20 16:47:51 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:47:51.504005020Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=eb25a3e46337e545 traceID=84390dab59c5ad9e70fc04dfdcfc5587
	Sep 20 16:47:51 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:47:51.506125748Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=eb25a3e46337e545 traceID=84390dab59c5ad9e70fc04dfdcfc5587
	Sep 20 16:48:31 ubuntu-20-agent-2 cri-dockerd[20140]: time="2024-09-20T16:48:31Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 20 16:48:32 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:48:32.505147433Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=b902693095f685ee traceID=7c6cf74371c1a3bad6e3556ce1d1dd31
	Sep 20 16:48:32 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:48:32.507143930Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=b902693095f685ee traceID=7c6cf74371c1a3bad6e3556ce1d1dd31
	Sep 20 16:48:32 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:48:32.896726893Z" level=info msg="ignoring event" container=e05d41be5b57769289988577a4ba80825a1558b3f27e0e8c013cd7f2d23b5f89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:49:56 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:49:56.509002605Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=0499bad85de9dc14 traceID=d887bbb31d5456c2cdfb705d2cb5e4af
	Sep 20 16:49:56 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:49:56.510819770Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=0499bad85de9dc14 traceID=d887bbb31d5456c2cdfb705d2cb5e4af
	Sep 20 16:51:19 ubuntu-20-agent-2 cri-dockerd[20140]: time="2024-09-20T16:51:19Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 20 16:51:20 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:51:20.927736334Z" level=info msg="ignoring event" container=761272c49907093faa0b8841ba76b785a29762b124047710326c703b523cc6b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:52:46 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:52:46.506028554Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=21b665320e391c53 traceID=7d9297b08aedd25f5d4127ea53b5200f
	Sep 20 16:52:46 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:52:46.508174803Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=21b665320e391c53 traceID=7d9297b08aedd25f5d4127ea53b5200f
	Sep 20 16:55:19 ubuntu-20-agent-2 cri-dockerd[20140]: time="2024-09-20T16:55:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6585c043c28edaa1f85cd6168102f26b883df41566d3e3b63d047ce5248c2334/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 20 16:55:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:55:19.567053458Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=26a9ced306de28e2 traceID=31e27a1a05e2ef1bea117b3244f834b8
	Sep 20 16:55:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:55:19.569055274Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=26a9ced306de28e2 traceID=31e27a1a05e2ef1bea117b3244f834b8
	Sep 20 16:55:32 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:55:32.500023546Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=8853d6cbdd932768 traceID=7f10c73c88caa084e5cab12a0e2168d0
	Sep 20 16:55:32 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:55:32.502266435Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=8853d6cbdd932768 traceID=7f10c73c88caa084e5cab12a0e2168d0
	Sep 20 16:56:01 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:01.494598498Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=83ffe8c570fd226b traceID=fa8ff08aeca36425f434a8e8fd0d1b5d
	Sep 20 16:56:01 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:01.496770565Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=83ffe8c570fd226b traceID=fa8ff08aeca36425f434a8e8fd0d1b5d
	Sep 20 16:56:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:19.042282190Z" level=info msg="ignoring event" container=6585c043c28edaa1f85cd6168102f26b883df41566d3e3b63d047ce5248c2334 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:56:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:19.304853545Z" level=info msg="ignoring event" container=3b77e8ce3973de48519d5e3f1462ffb5c19bb2238b9610ae64ee0ed8e6cdacfe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:56:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:19.363417344Z" level=info msg="ignoring event" container=3499c33f6dc7f72c07fd07e64b7a203c8ea0d6100d362cf0a01f08fd49be947d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:56:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:19.442165292Z" level=info msg="ignoring event" container=7bcfe028e3c17ae776dc7da8b5ff8d2ba8ac38904882095578441f922672050b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 16:56:19 ubuntu-20-agent-2 cri-dockerd[20140]: time="2024-09-20T16:56:19Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-9zk5q_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 20 16:56:19 ubuntu-20-agent-2 dockerd[19811]: time="2024-09-20T16:56:19.523056670Z" level=info msg="ignoring event" container=f59fd69f5f624a64712267dd52b9d7ddcf6be20e37547c0bf37be26dff631f2d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	761272c499070       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            5 minutes ago       Exited              gadget                                   6                   a3702797bf688       gadget-rqjhz
	b824c446317f0       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   0b6d83a6662a5       gcp-auth-89d5ffd79-l7c52
	9233cc1a286bb       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   fe371be30d5df       csi-hostpathplugin-pgw8q
	c1d3e9e939246       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   fe371be30d5df       csi-hostpathplugin-pgw8q
	e55441060dd21       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   fe371be30d5df       csi-hostpathplugin-pgw8q
	1b5567bbc8b9c       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   fe371be30d5df       csi-hostpathplugin-pgw8q
	8de13437db741       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   fe371be30d5df       csi-hostpathplugin-pgw8q
	6282e18c0c588       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   fe371be30d5df       csi-hostpathplugin-pgw8q
	d8ea8dd70dd1e       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   62deafb3d43de       csi-hostpath-attacher-0
	de3920beebf4e       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   ca72ed8e700de       csi-hostpath-resizer-0
	6ee920900318e       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   74b81a5c7cf84       snapshot-controller-56fcc65765-jbh9v
	1c5caf88fe99f       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   e418ffd829e68       snapshot-controller-56fcc65765-prdk4
	bada2cb22b84e       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   764c7f4a16df2       yakd-dashboard-67d98fc6b-bncjr
	3499c33f6dc7f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              10 minutes ago      Exited              registry-proxy                           0                   f59fd69f5f624       registry-proxy-9zk5q
	2224374459b01       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       11 minutes ago      Running             local-path-provisioner                   0                   257ecc7c29379       local-path-provisioner-86d989889c-8h6tk
	bfd9cd31093ba       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   8a48b35ba47c2       cloud-spanner-emulator-769b77f747-q25bc
	8487a72983672       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   8e6c9063ffa4c       metrics-server-84c5f94fbc-kmrlz
	3b77e8ce3973d       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Exited              registry                                 0                   7bcfe028e3c17       registry-66c9cd494c-8c7tp
	d76b642064b61       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   0ffc5659b2f6f       nvidia-device-plugin-daemonset-c2k6b
	bb39a019aea73       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   7653e9d16f06b       storage-provisioner
	ddb373bc098c1       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   ae5ee16ca3aa4       coredns-7c65d6cfc9-48qs6
	64711f7fb5fe2       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   78f0aeb605b90       kube-proxy-4z8bv
	8b1d9d632055c       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   a6cad0f7b31b9       kube-apiserver-ubuntu-20-agent-2
	d12067daeed64       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   12ab18da970db       etcd-ubuntu-20-agent-2
	d4a743beacae2       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   af2e2f749d164       kube-scheduler-ubuntu-20-agent-2
	8c1debcecf77e       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   6f7d19d2fe08f       kube-controller-manager-ubuntu-20-agent-2
	
	
	==> coredns [ddb373bc098c] <==
	[INFO] 10.244.0.8:34085 - 25797 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066024s
	[INFO] 10.244.0.8:53200 - 38483 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000045025s
	[INFO] 10.244.0.8:53200 - 18005 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078633s
	[INFO] 10.244.0.8:60849 - 15352 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000060848s
	[INFO] 10.244.0.8:60849 - 56314 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000093917s
	[INFO] 10.244.0.8:36641 - 28747 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000073202s
	[INFO] 10.244.0.8:36641 - 64584 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000107707s
	[INFO] 10.244.0.8:45074 - 51864 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000065108s
	[INFO] 10.244.0.8:45074 - 59802 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000095049s
	[INFO] 10.244.0.8:39482 - 34201 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072183s
	[INFO] 10.244.0.8:39482 - 1182 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000119903s
	[INFO] 10.244.0.23:52062 - 45329 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00028552s
	[INFO] 10.244.0.23:49914 - 11992 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000135866s
	[INFO] 10.244.0.23:46422 - 53216 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000132762s
	[INFO] 10.244.0.23:54219 - 20616 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144431s
	[INFO] 10.244.0.23:46848 - 38267 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000143456s
	[INFO] 10.244.0.23:53945 - 64382 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000221399s
	[INFO] 10.244.0.23:43151 - 31082 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003237774s
	[INFO] 10.244.0.23:58751 - 12471 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005385542s
	[INFO] 10.244.0.23:35640 - 56857 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003051086s
	[INFO] 10.244.0.23:53411 - 36226 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003179774s
	[INFO] 10.244.0.23:50848 - 16832 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003459792s
	[INFO] 10.244.0.23:49092 - 16708 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003632444s
	[INFO] 10.244.0.23:41497 - 32368 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001446385s
	[INFO] 10.244.0.23:53917 - 52470 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002049886s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0626f22cf0d915d75e291a5bce701f94395056e1
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T16_44_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 16:44:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 16:56:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 16:52:06 +0000   Fri, 20 Sep 2024 16:44:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 16:52:06 +0000   Fri, 20 Sep 2024 16:44:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 16:52:06 +0000   Fri, 20 Sep 2024 16:44:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 16:52:06 +0000   Fri, 20 Sep 2024 16:44:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    0fd695e7-50c5-4838-9acc-b2d1cdaf04a4
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     cloud-spanner-emulator-769b77f747-q25bc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-rqjhz                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-l7c52                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-48qs6                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-pgw8q                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-4z8bv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-kmrlz              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-c2k6b         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-jbh9v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-prdk4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-8h6tk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-bncjr               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 0e 35 92 0d 81 08 06
	[  +0.033126] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 ed 9b 64 03 38 08 06
	[  +2.557801] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ce 8e 2f 33 38 66 08 06
	[  +1.916982] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 1e 57 a1 e9 51 08 06
	[  +3.689238] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 82 f5 1c a3 4e 08 06
	[  +2.838983] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a2 c9 4a b5 06 2a 08 06
	[  +0.097061] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 ac 0e a0 18 49 08 06
	[  +0.186938] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be 2b 31 ce 69 e1 08 06
	[  +0.043588] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 64 a8 66 ec 3c 08 06
	[Sep20 16:46] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 02 19 d5 e5 94 81 08 06
	[  +0.028237] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2e 64 ce 6f 7e 5a 08 06
	[ +10.731072] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 36 d3 ac e2 03 bd 08 06
	[  +0.000441] IPv4: martian source 10.244.0.23 from 10.244.0.5, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 8a 4e 41 44 28 37 08 06
	
	
	==> etcd [d12067daeed6] <==
	{"level":"info","ts":"2024-09-20T16:44:54.668119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T16:44:54.668146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
	{"level":"info","ts":"2024-09-20T16:44:54.668161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:54.668168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:54.668179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:54.668189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-20T16:44:54.669018Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T16:44:54.669023Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T16:44:54.669046Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T16:44:54.669189Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T16:44:54.669229Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T16:44:54.669328Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:54.670241Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:54.670296Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T16:44:54.670306Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:54.670326Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T16:44:54.670426Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T16:44:54.671298Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-20T16:44:54.671500Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T16:45:21.392027Z","caller":"traceutil/trace.go:171","msg":"trace[534191523] transaction","detail":"{read_only:false; response_revision:911; number_of_response:1; }","duration":"116.649023ms","start":"2024-09-20T16:45:21.275363Z","end":"2024-09-20T16:45:21.392012Z","steps":["trace[534191523] 'process raft request'  (duration: 116.545008ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T16:45:30.707679Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.905127ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T16:45:30.707742Z","caller":"traceutil/trace.go:171","msg":"trace[1531925896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:959; }","duration":"117.008974ms","start":"2024-09-20T16:45:30.590722Z","end":"2024-09-20T16:45:30.707731Z","steps":["trace[1531925896] 'range keys from in-memory index tree'  (duration: 116.857355ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T16:54:54.972943Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1679}
	{"level":"info","ts":"2024-09-20T16:54:54.995958Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1679,"took":"22.529422ms","hash":3710773180,"current-db-size-bytes":8122368,"current-db-size":"8.1 MB","current-db-size-in-use-bytes":4227072,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2024-09-20T16:54:54.996008Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3710773180,"revision":1679,"compact-revision":-1}
	
	
	==> gcp-auth [b824c446317f] <==
	2024/09/20 16:46:26 GCP Auth Webhook started!
	2024/09/20 16:46:42 Ready to marshal response ...
	2024/09/20 16:46:42 Ready to write response ...
	2024/09/20 16:46:43 Ready to marshal response ...
	2024/09/20 16:46:43 Ready to write response ...
	2024/09/20 16:47:05 Ready to marshal response ...
	2024/09/20 16:47:05 Ready to write response ...
	2024/09/20 16:47:05 Ready to marshal response ...
	2024/09/20 16:47:05 Ready to write response ...
	2024/09/20 16:47:05 Ready to marshal response ...
	2024/09/20 16:47:05 Ready to write response ...
	2024/09/20 16:55:18 Ready to marshal response ...
	2024/09/20 16:55:18 Ready to write response ...
	
	
	==> kernel <==
	 16:56:20 up 38 min,  0 users,  load average: 0.14, 0.31, 0.35
	Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [8b1d9d632055] <==
	W0920 16:45:45.078414       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.226.206:443: connect: connection refused
	W0920 16:45:52.003518       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.136.232:443: connect: connection refused
	E0920 16:45:52.003558       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.136.232:443: connect: connection refused" logger="UnhandledError"
	W0920 16:46:14.022130       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.136.232:443: connect: connection refused
	E0920 16:46:14.022166       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.136.232:443: connect: connection refused" logger="UnhandledError"
	W0920 16:46:14.030024       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.136.232:443: connect: connection refused
	E0920 16:46:14.030070       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.136.232:443: connect: connection refused" logger="UnhandledError"
	I0920 16:46:42.298640       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0920 16:46:42.314913       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0920 16:46:55.680899       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0920 16:46:55.691765       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0920 16:46:55.810373       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 16:46:55.815031       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 16:46:55.827739       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0920 16:46:55.863481       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 16:46:55.992205       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 16:46:56.005837       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 16:46:56.025290       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0920 16:46:56.708762       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0920 16:46:56.852938       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 16:46:56.863912       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 16:46:56.964024       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0920 16:46:56.964052       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 16:46:57.025425       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 16:46:57.199533       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [8c1debcecf77] <==
	W0920 16:55:19.000015       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:55:19.000059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:55:21.307948       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:55:21.307990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:55:21.743080       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:55:21.743122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:55:22.158740       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:55:22.158784       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:55:25.754361       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:55:25.754403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:55:36.376454       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:55:36.376503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:55:39.380105       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:55:39.380147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:55:54.077922       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:55:54.077966       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:55:54.266841       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:55:54.266890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:56:13.234577       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:13.234624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:56:14.906970       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:14.907012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 16:56:19.045721       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 16:56:19.045768       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 16:56:19.270096       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="4.693µs"
	
	
	==> kube-proxy [64711f7fb5fe] <==
	I0920 16:45:04.799249       1 server_linux.go:66] "Using iptables proxy"
	I0920 16:45:04.952709       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0920 16:45:04.952795       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 16:45:05.080907       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 16:45:05.080969       1 server_linux.go:169] "Using iptables Proxier"
	I0920 16:45:05.086446       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 16:45:05.086788       1 server.go:483] "Version info" version="v1.31.1"
	I0920 16:45:05.086818       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 16:45:05.089729       1 config.go:199] "Starting service config controller"
	I0920 16:45:05.089757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 16:45:05.089780       1 config.go:105] "Starting endpoint slice config controller"
	I0920 16:45:05.089789       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 16:45:05.090865       1 config.go:328] "Starting node config controller"
	I0920 16:45:05.090882       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 16:45:05.190317       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 16:45:05.190401       1 shared_informer.go:320] Caches are synced for service config
	I0920 16:45:05.191197       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d4a743beacae] <==
	W0920 16:44:55.903435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:55.903461       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:55.903513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0920 16:44:55.903514       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:55.903539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0920 16:44:55.903543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:55.903739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0920 16:44:55.903757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 16:44:55.903771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0920 16:44:55.903779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:56.759622       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 16:44:56.759661       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:56.810431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 16:44:56.810470       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:56.820979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 16:44:56.821017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:56.933897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:56.933945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:56.995374       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 16:44:56.995410       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 16:44:57.024903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 16:44:57.024942       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 16:44:57.042268       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 16:44:57.042314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 16:44:59.602146       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Wed 2024-08-07 18:08:31 UTC, end at Fri 2024-09-20 16:56:20 UTC. --
	Sep 20 16:55:46 ubuntu-20-agent-2 kubelet[21031]: E0920 16:55:46.352623   21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="82d10f6c-9141-435e-ab8e-d1cb8af8b80a"
	Sep 20 16:55:48 ubuntu-20-agent-2 kubelet[21031]: I0920 16:55:48.350926   21031 scope.go:117] "RemoveContainer" containerID="761272c49907093faa0b8841ba76b785a29762b124047710326c703b523cc6b3"
	Sep 20 16:55:48 ubuntu-20-agent-2 kubelet[21031]: E0920 16:55:48.351096   21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rqjhz_gadget(b04b979f-1bd7-4335-87c3-a1abf4133b06)\"" pod="gadget/gadget-rqjhz" podUID="b04b979f-1bd7-4335-87c3-a1abf4133b06"
	Sep 20 16:55:58 ubuntu-20-agent-2 kubelet[21031]: E0920 16:55:58.353386   21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb6d2fa1-ad89-4feb-92c6-a1bec468fff3"
	Sep 20 16:56:01 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:01.350196   21031 scope.go:117] "RemoveContainer" containerID="761272c49907093faa0b8841ba76b785a29762b124047710326c703b523cc6b3"
	Sep 20 16:56:01 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:01.350379   21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rqjhz_gadget(b04b979f-1bd7-4335-87c3-a1abf4133b06)\"" pod="gadget/gadget-rqjhz" podUID="b04b979f-1bd7-4335-87c3-a1abf4133b06"
	Sep 20 16:56:01 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:01.497266   21031 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:latest"
	Sep 20 16:56:01 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:01.497439   21031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-test,Image:gcr.io/k8s-minikube/busybox,Command:[],Args:[sh -c wget --spider -S http://registry.kube-system.svc.cluster.local],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t9k7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:n
il,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-test_default(82d10f6c-9141-435e-ab8e-d1cb8af8b80a): ErrImagePull: Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" logger="UnhandledError"
	Sep 20 16:56:01 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:01.498623   21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="82d10f6c-9141-435e-ab8e-d1cb8af8b80a"
	Sep 20 16:56:11 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:11.352881   21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cb6d2fa1-ad89-4feb-92c6-a1bec468fff3"
	Sep 20 16:56:14 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:14.350543   21031 scope.go:117] "RemoveContainer" containerID="761272c49907093faa0b8841ba76b785a29762b124047710326c703b523cc6b3"
	Sep 20 16:56:14 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:14.350774   21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rqjhz_gadget(b04b979f-1bd7-4335-87c3-a1abf4133b06)\"" pod="gadget/gadget-rqjhz" podUID="b04b979f-1bd7-4335-87c3-a1abf4133b06"
	Sep 20 16:56:15 ubuntu-20-agent-2 kubelet[21031]: E0920 16:56:15.352790   21031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="82d10f6c-9141-435e-ab8e-d1cb8af8b80a"
	Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.207049   21031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/82d10f6c-9141-435e-ab8e-d1cb8af8b80a-gcp-creds\") pod \"82d10f6c-9141-435e-ab8e-d1cb8af8b80a\" (UID: \"82d10f6c-9141-435e-ab8e-d1cb8af8b80a\") "
	Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.207116   21031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9k7k\" (UniqueName: \"kubernetes.io/projected/82d10f6c-9141-435e-ab8e-d1cb8af8b80a-kube-api-access-t9k7k\") pod \"82d10f6c-9141-435e-ab8e-d1cb8af8b80a\" (UID: \"82d10f6c-9141-435e-ab8e-d1cb8af8b80a\") "
	Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.207124   21031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82d10f6c-9141-435e-ab8e-d1cb8af8b80a-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "82d10f6c-9141-435e-ab8e-d1cb8af8b80a" (UID: "82d10f6c-9141-435e-ab8e-d1cb8af8b80a"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.207205   21031 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/82d10f6c-9141-435e-ab8e-d1cb8af8b80a-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.209045   21031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82d10f6c-9141-435e-ab8e-d1cb8af8b80a-kube-api-access-t9k7k" (OuterVolumeSpecName: "kube-api-access-t9k7k") pod "82d10f6c-9141-435e-ab8e-d1cb8af8b80a" (UID: "82d10f6c-9141-435e-ab8e-d1cb8af8b80a"). InnerVolumeSpecName "kube-api-access-t9k7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.308272   21031 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t9k7k\" (UniqueName: \"kubernetes.io/projected/82d10f6c-9141-435e-ab8e-d1cb8af8b80a-kube-api-access-t9k7k\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.610186   21031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxjgr\" (UniqueName: \"kubernetes.io/projected/be7ec7f6-7cec-4f63-bab2-8844fbb26f79-kube-api-access-pxjgr\") pod \"be7ec7f6-7cec-4f63-bab2-8844fbb26f79\" (UID: \"be7ec7f6-7cec-4f63-bab2-8844fbb26f79\") "
	Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.610230   21031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-494s2\" (UniqueName: \"kubernetes.io/projected/7bdaa858-4534-4dbd-b767-3de12e3d88ce-kube-api-access-494s2\") pod \"7bdaa858-4534-4dbd-b767-3de12e3d88ce\" (UID: \"7bdaa858-4534-4dbd-b767-3de12e3d88ce\") "
	Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.612531   21031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bdaa858-4534-4dbd-b767-3de12e3d88ce-kube-api-access-494s2" (OuterVolumeSpecName: "kube-api-access-494s2") pod "7bdaa858-4534-4dbd-b767-3de12e3d88ce" (UID: "7bdaa858-4534-4dbd-b767-3de12e3d88ce"). InnerVolumeSpecName "kube-api-access-494s2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.612654   21031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7ec7f6-7cec-4f63-bab2-8844fbb26f79-kube-api-access-pxjgr" (OuterVolumeSpecName: "kube-api-access-pxjgr") pod "be7ec7f6-7cec-4f63-bab2-8844fbb26f79" (UID: "be7ec7f6-7cec-4f63-bab2-8844fbb26f79"). InnerVolumeSpecName "kube-api-access-pxjgr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.711054   21031 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pxjgr\" (UniqueName: \"kubernetes.io/projected/be7ec7f6-7cec-4f63-bab2-8844fbb26f79-kube-api-access-pxjgr\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 16:56:19 ubuntu-20-agent-2 kubelet[21031]: I0920 16:56:19.711086   21031 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-494s2\" (UniqueName: \"kubernetes.io/projected/7bdaa858-4534-4dbd-b767-3de12e3d88ce-kube-api-access-494s2\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	
	
	==> storage-provisioner [bb39a019aea7] <==
	I0920 16:45:05.601816       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 16:45:05.623279       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 16:45:05.623351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 16:45:05.635294       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 16:45:05.635539       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_5109ecaa-ff23-4456-9819-6940036e747f!
	I0920 16:45:05.636938       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"12ed9626-6ef3-4ef7-a1fd-06f621a5fa2e", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_5109ecaa-ff23-4456-9819-6940036e747f became leader
	I0920 16:45:05.736661       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_5109ecaa-ff23-4456-9819-6940036e747f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox registry-66c9cd494c-8c7tp registry-proxy-9zk5q
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox registry-66c9cd494c-8c7tp registry-proxy-9zk5q
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context minikube describe pod busybox registry-66c9cd494c-8c7tp registry-proxy-9zk5q: exit status 1 (80.048525ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Fri, 20 Sep 2024 16:47:05 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x5bcm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x5bcm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m48s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m48s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m48s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m20s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m3s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "registry-66c9cd494c-8c7tp" not found
	Error from server (NotFound): pods "registry-proxy-9zk5q" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context minikube describe pod busybox registry-66c9cd494c-8c7tp registry-proxy-9zk5q: exit status 1
--- FAIL: TestAddons/parallel/Registry (72.82s)

                                                
                                    

Test pass (110/167)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.99
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 0.97
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.53
22 TestOffline 75.04
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 100.7
29 TestAddons/serial/Volcano 38.42
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.45
36 TestAddons/parallel/MetricsServer 5.37
38 TestAddons/parallel/CSI 39.68
39 TestAddons/parallel/Headlamp 14.88
40 TestAddons/parallel/CloudSpanner 6.24
42 TestAddons/parallel/NvidiaDevicePlugin 6.22
43 TestAddons/parallel/Yakd 10.39
44 TestAddons/StoppedEnableDisable 10.74
46 TestCertExpiration 228.82
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 26.94
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 32.4
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.07
64 TestFunctional/serial/MinikubeKubectlCmd 0.1
65 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
66 TestFunctional/serial/ExtraConfig 38.82
67 TestFunctional/serial/ComponentHealth 0.07
68 TestFunctional/serial/LogsCmd 0.78
69 TestFunctional/serial/LogsFileCmd 0.82
70 TestFunctional/serial/InvalidService 3.75
72 TestFunctional/parallel/ConfigCmd 0.27
73 TestFunctional/parallel/DashboardCmd 7.53
74 TestFunctional/parallel/DryRun 0.17
75 TestFunctional/parallel/InternationalLanguage 0.08
76 TestFunctional/parallel/StatusCmd 0.42
79 TestFunctional/parallel/ProfileCmd/profile_not_create 0.21
80 TestFunctional/parallel/ProfileCmd/profile_list 0.19
81 TestFunctional/parallel/ProfileCmd/profile_json_output 0.2
83 TestFunctional/parallel/ServiceCmd/DeployApp 9.14
84 TestFunctional/parallel/ServiceCmd/List 0.33
85 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
86 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
87 TestFunctional/parallel/ServiceCmd/Format 0.15
88 TestFunctional/parallel/ServiceCmd/URL 0.15
89 TestFunctional/parallel/ServiceCmdConnect 7.29
90 TestFunctional/parallel/AddonsCmd 0.11
91 TestFunctional/parallel/PersistentVolumeClaim 21.95
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.26
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
97 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.18
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
103 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
106 TestFunctional/parallel/MySQL 20.85
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 12.78
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 12.28
115 TestFunctional/parallel/NodeLabels 0.06
119 TestFunctional/parallel/Version/short 0.04
120 TestFunctional/parallel/Version/components 0.38
121 TestFunctional/parallel/License 0.26
122 TestFunctional/delete_echo-server_images 0.03
123 TestFunctional/delete_my-image_image 0.01
124 TestFunctional/delete_minikube_cached_images 0.01
129 TestImageBuild/serial/Setup 13.9
130 TestImageBuild/serial/NormalBuild 1.54
131 TestImageBuild/serial/BuildWithBuildArg 0.85
132 TestImageBuild/serial/BuildWithDockerIgnore 0.57
133 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.57
137 TestJSONOutput/start/Command 26.28
138 TestJSONOutput/start/Audit 0
140 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
141 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
143 TestJSONOutput/pause/Command 0.47
144 TestJSONOutput/pause/Audit 0
146 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/unpause/Command 0.38
150 TestJSONOutput/unpause/Audit 0
152 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/stop/Command 10.41
156 TestJSONOutput/stop/Audit 0
158 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
160 TestErrorJSONOutput 0.19
165 TestMainNoArgs 0.04
166 TestMinikubeProfile 34.26
174 TestPause/serial/Start 29.05
175 TestPause/serial/SecondStartNoReconfiguration 25.46
176 TestPause/serial/Pause 0.49
177 TestPause/serial/VerifyStatus 0.12
178 TestPause/serial/Unpause 0.42
179 TestPause/serial/PauseAgain 0.53
180 TestPause/serial/DeletePaused 1.6
181 TestPause/serial/VerifyDeletedResources 0.06
195 TestRunningBinaryUpgrade 67.17
197 TestStoppedBinaryUpgrade/Setup 1.03
198 TestStoppedBinaryUpgrade/Upgrade 50.38
199 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
200 TestKubernetesUpgrade 315.93
TestDownloadOnly/v1.20.0/json-events (1.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.986983783s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.99s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (54.889837ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:43:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:43:26.968876   15551 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:43:26.968996   15551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:26.969006   15551 out.go:358] Setting ErrFile to fd 2...
	I0920 16:43:26.969010   15551 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:26.969179   15551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8660/.minikube/bin
	W0920 16:43:26.969287   15551 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19672-8660/.minikube/config/config.json: open /home/jenkins/minikube-integration/19672-8660/.minikube/config/config.json: no such file or directory
	I0920 16:43:26.969811   15551 out.go:352] Setting JSON to true
	I0920 16:43:26.970669   15551 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1559,"bootTime":1726849048,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 16:43:26.970796   15551 start.go:139] virtualization: kvm guest
	I0920 16:43:26.973116   15551 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 16:43:26.973230   15551 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-8660/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 16:43:26.973293   15551 notify.go:220] Checking for updates...
	I0920 16:43:26.974570   15551 out.go:169] MINIKUBE_LOCATION=19672
	I0920 16:43:26.975981   15551 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:43:26.977312   15551 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-8660/kubeconfig
	I0920 16:43:26.978586   15551 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8660/.minikube
	I0920 16:43:26.980010   15551 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (0.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (0.97s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (55.343326ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC | 20 Sep 24 16:43 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 20 Sep 24 16:43 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 16:43:29
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 16:43:29.238787   15708 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:43:29.238886   15708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:29.238893   15708 out.go:358] Setting ErrFile to fd 2...
	I0920 16:43:29.238898   15708 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:43:29.239054   15708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8660/.minikube/bin
	I0920 16:43:29.239583   15708 out.go:352] Setting JSON to true
	I0920 16:43:29.240405   15708 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1561,"bootTime":1726849048,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 16:43:29.240495   15708 start.go:139] virtualization: kvm guest
	I0920 16:43:29.242613   15708 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 16:43:29.242699   15708 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-8660/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 16:43:29.242733   15708 notify.go:220] Checking for updates...
	I0920 16:43:29.244182   15708 out.go:169] MINIKUBE_LOCATION=19672
	I0920 16:43:29.245710   15708 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:43:29.247139   15708 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-8660/kubeconfig
	I0920 16:43:29.248399   15708 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8660/.minikube
	I0920 16:43:29.249634   15708 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.53s)
=== RUN   TestBinaryMirror
I0920 16:43:30.689786   15539 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:40853 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.53s)

TestOffline (75.04s)
=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (1m13.454963445s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.585087441s)
--- PASS: TestOffline (75.04s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (47.978977ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (44.382275ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

TestAddons/Setup (100.7s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (1m40.70408099s)
--- PASS: TestAddons/Setup (100.70s)

TestAddons/serial/Volcano (38.42s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 8.539749ms
addons_test.go:851: volcano-controller stabilized in 8.5721ms
addons_test.go:835: volcano-scheduler stabilized in 8.586919ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-qkwfj" [fd29d236-918d-49ec-858d-cdd6c86f44f1] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003362277s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-wnbkt" [aa478073-4768-4ace-a380-9661de411730] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002957978s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-7flwx" [d794013e-5c2f-4fcc-b920-5bf24c89745a] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003740171s
addons_test.go:870: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [1947243a-e0a5-4021-8d80-bda7417d7430] Pending
helpers_test.go:344: "test-job-nginx-0" [1947243a-e0a5-4021-8d80-bda7417d7430] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [1947243a-e0a5-4021-8d80-bda7417d7430] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.003712852s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.111232148s)
--- PASS: TestAddons/serial/Volcano (38.42s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.45s)
=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rqjhz" [b04b979f-1bd7-4335-87c3-a1abf4133b06] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004097908s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.439851113s)
--- PASS: TestAddons/parallel/InspektorGadget (10.45s)

TestAddons/parallel/MetricsServer (5.37s)
=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.041194ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-kmrlz" [6e334bf0-acd9-45f6-8232-8231952e001c] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003505121s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.37s)

TestAddons/parallel/CSI (39.68s)
=== RUN   TestAddons/parallel/CSI
I0920 16:56:36.400998   15539 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 16:56:36.404928   15539 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 16:56:36.404952   15539 kapi.go:107] duration metric: took 3.967604ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 3.975851ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [acb1228e-be49-49ee-878e-8cc53f79bd25] Pending
helpers_test.go:344: "task-pv-pod" [acb1228e-be49-49ee-878e-8cc53f79bd25] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [acb1228e-be49-49ee-878e-8cc53f79bd25] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003839748s
addons_test.go:528: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context minikube delete pod task-pv-pod: (1.10638211s)
addons_test.go:544: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5a212de8-6185-4d4f-8766-706ae2b37ead] Pending
helpers_test.go:344: "task-pv-pod-restore" [5a212de8-6185-4d4f-8766-706ae2b37ead] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5a212de8-6185-4d4f-8766-706ae2b37ead] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003375391s
addons_test.go:570: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.287885228s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.68s)

TestAddons/parallel/Headlamp (14.88s)
=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-lccng" [4367bd05-acd9-4786-8f66-f552bc5db7ab] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-lccng" [4367bd05-acd9-4786-8f66-f552bc5db7ab] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003514187s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.387313958s)
--- PASS: TestAddons/parallel/Headlamp (14.88s)

TestAddons/parallel/CloudSpanner (6.24s)
=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-q25bc" [2726aa73-ec05-4c52-9f4c-57a4a748ea25] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003440812s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (6.24s)

TestAddons/parallel/NvidiaDevicePlugin (6.22s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-c2k6b" [14de1edd-c7d5-44d9-881f-cad9fc8dffde] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004118612s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.22s)

TestAddons/parallel/Yakd (10.39s)
=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-bncjr" [0be6768a-4763-4c27-b93a-8706bc3f609d] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003279702s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.381678767s)
--- PASS: TestAddons/parallel/Yakd (10.39s)

TestAddons/StoppedEnableDisable (10.74s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.454326351s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.74s)

TestCertExpiration (228.82s)
=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.111521423s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.862132087s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.847928504s)
--- PASS: TestCertExpiration (228.82s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19672-8660/.minikube/files/etc/test/nested/copy/15539/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (26.94s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (26.93657308s)
--- PASS: TestFunctional/serial/StartWithProxy (26.94s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.4s)
=== RUN   TestFunctional/serial/SoftStart
I0920 17:02:21.526132   15539 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (32.3983244s)
functional_test.go:663: soft start took 32.398881617s for "minikube" cluster.
I0920 17:02:53.924948   15539 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (32.40s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (38.82s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.815637473s)
functional_test.go:761: restart took 38.815750945s for "minikube" cluster.
I0920 17:03:33.055578   15539 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (38.82s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.78s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.78s)

TestFunctional/serial/LogsFileCmd (0.82s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd833143806/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.82s)

TestFunctional/serial/InvalidService (3.75s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (152.589182ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:31528 |
	|-----------|-------------|-------------|--------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.75s)

TestFunctional/parallel/ConfigCmd (0.27s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (42.709945ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (41.866317ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.27s)
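The ConfigCmd run above exercises the unset → get → set → get → unset → get cycle, with `config get` on a missing key exiting with status 14. A minimal POSIX-shell sketch of that contract, using a hypothetical `mk_config` stub in place of `out/minikube-linux-amd64 -p minikube config` (the exit code 14 is taken from this log, not from minikube's source):

```shell
# mk_config is a stand-in for `minikube config`; it stores keys in a temp file.
CONF="$(mktemp)"
mk_config() {
  case "$1" in
    set)   printf '%s=%s\n' "$2" "$3" > "$CONF" ;;   # config set KEY VALUE
    unset) : > "$CONF" ;;                            # config unset KEY
    get)   grep -q "^$2=" "$CONF" \
             && sed -n "s/^$2=//p" "$CONF" \
             || return 14 ;;                         # missing key -> status 14
  esac
}

mk_config unset cpus
mk_config get cpus || echo "exit status $?"   # key absent -> exit status 14
mk_config set cpus 2
mk_config get cpus                            # prints the stored value: 2
mk_config unset cpus
mk_config get cpus || echo "exit status $?"   # absent again -> exit status 14
```

The stub only models the observable behavior (value round-trip plus the non-zero status on a missing key); real minikube persists config as JSON under `$MINIKUBE_HOME`.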

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/20 17:03:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 50326: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.53s)

TestFunctional/parallel/DryRun (0.17s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (94.120878ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8660/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8660/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile

-- /stdout --
** stderr ** 
	I0920 17:03:46.315976   50699 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:03:46.316093   50699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:03:46.316103   50699 out.go:358] Setting ErrFile to fd 2...
	I0920 17:03:46.316107   50699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:03:46.316296   50699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8660/.minikube/bin
	I0920 17:03:46.316882   50699 out.go:352] Setting JSON to false
	I0920 17:03:46.317886   50699 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2778,"bootTime":1726849048,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:03:46.317981   50699 start.go:139] virtualization: kvm guest
	I0920 17:03:46.320475   50699 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 17:03:46.322203   50699 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-8660/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:03:46.322243   50699 notify.go:220] Checking for updates...
	I0920 17:03:46.322253   50699 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:03:46.323895   50699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:03:46.325347   50699 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8660/kubeconfig
	I0920 17:03:46.326797   50699 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8660/.minikube
	I0920 17:03:46.328294   50699 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:03:46.329843   50699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:03:46.331799   50699 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:03:46.332271   50699 exec_runner.go:51] Run: systemctl --version
	I0920 17:03:46.335177   50699 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:03:46.360484   50699 out.go:177] * Using the none driver based on existing profile
	I0920 17:03:46.361871   50699 start.go:297] selected driver: none
	I0920 17:03:46.361889   50699 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:03:46.362037   50699 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:03:46.362065   50699 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0920 17:03:46.362515   50699 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0920 17:03:46.364791   50699 out.go:201] 
	W0920 17:03:46.365997   50699 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 17:03:46.367136   50699 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.17s)
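Both dry-run rejections in this report come from the same validation: the requested `--memory` (250MB) is compared against a usable minimum (1800MB in this run) before any node is created, and start aborts with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23. A hedged shell sketch of that gate, with the threshold and exit code read off the log rather than minikube's source:

```shell
# Reproduce the memory gate seen in the log; all values are assumptions
# taken from this specific run, not constants from minikube itself.
req_mb=250     # from `--memory 250MB`
min_mb=1800    # "usable minimum of 1800MB" per the error message above
if [ "$req_mb" -lt "$min_mb" ]; then
  echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: ${req_mb}MiB < ${min_mb}MB" >&2
  status=23    # exit status the test harness recorded
else
  status=0
fi
echo "start would exit with status $status"
```

Because the check runs during flag validation, `--dry-run` still trips it, which is exactly what the DryRun and InternationalLanguage tests rely on.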

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (79.661602ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8660/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8660/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant

-- /stdout --
** stderr ** 
	I0920 17:03:46.487766   50729 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:03:46.487881   50729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:03:46.487889   50729 out.go:358] Setting ErrFile to fd 2...
	I0920 17:03:46.487893   50729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:03:46.488133   50729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-8660/.minikube/bin
	I0920 17:03:46.488619   50729 out.go:352] Setting JSON to false
	I0920 17:03:46.489616   50729 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2778,"bootTime":1726849048,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:03:46.489712   50729 start.go:139] virtualization: kvm guest
	I0920 17:03:46.492131   50729 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0920 17:03:46.493912   50729 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-8660/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:03:46.493953   50729 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 17:03:46.493959   50729 notify.go:220] Checking for updates...
	I0920 17:03:46.495401   50729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:03:46.497062   50729 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-8660/kubeconfig
	I0920 17:03:46.498753   50729 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8660/.minikube
	I0920 17:03:46.500464   50729 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:03:46.502065   50729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:03:46.503899   50729 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:03:46.504160   50729 exec_runner.go:51] Run: systemctl --version
	I0920 17:03:46.506723   50729 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:03:46.517337   50729 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0920 17:03:46.518646   50729 start.go:297] selected driver: none
	I0920 17:03:46.518660   50729 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:03:46.518775   50729 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:03:46.518801   50729 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0920 17:03:46.519121   50729 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0920 17:03:46.522119   50729 out.go:201] 
	W0920 17:03:46.523686   50729 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 17:03:46.525049   50729 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

TestFunctional/parallel/StatusCmd (0.42s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.42s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

TestFunctional/parallel/ProfileCmd/profile_list (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "150.898924ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.84736ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.19s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "157.45172ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.165145ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.20s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-pwv2v" [4dead372-5310-4188-9a70-96e53550fab2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-pwv2v" [4dead372-5310-4188-9a70-96e53550fab2] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003612676s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)

TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "319.982111ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:30873
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.15s)

TestFunctional/parallel/ServiceCmd/URL (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:30873
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.15s)

TestFunctional/parallel/ServiceCmdConnect (7.29s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-6hzwd" [47069450-2cce-497d-a7d1-15d588420f13] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-6hzwd" [47069450-2cce-497d-a7d1-15d588420f13] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003110829s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:31413
functional_test.go:1675: http://10.138.0.48:31413: success! body:

Hostname: hello-node-connect-67bdd5bbb4-6hzwd

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:31413
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.29s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (21.95s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4a361275-5b19-4050-a38c-d96de026dc40] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.002949744s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5c5b4d37-6c59-4fa9-a957-aa11fc70f9fa] Pending
helpers_test.go:344: "sp-pod" [5c5b4d37-6c59-4fa9-a957-aa11fc70f9fa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5c5b4d37-6c59-4fa9-a957-aa11fc70f9fa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004063358s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.2677073s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b7b6d77b-8915-49a7-b50d-2a20c6bca89f] Pending
helpers_test.go:344: "sp-pod" [b7b6d77b-8915-49a7-b50d-2a20c6bca89f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b7b6d77b-8915-49a7-b50d-2a20c6bca89f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003691949s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.95s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 52375: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [548f77c8-d6e0-4e37-86de-70763a0ac614] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [548f77c8-d6e0-4e37-86de-70763a0ac614] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00308913s
I0920 17:04:37.586797   15539 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.210.148 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MySQL (20.85s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-xczxs" [53986732-8d31-4710-8410-308e1a79ed66] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-xczxs" [53986732-8d31-4710-8410-308e1a79ed66] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.003753942s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-xczxs -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-xczxs -- mysql -ppassword -e "show databases;": exit status 1 (108.193026ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 17:04:56.072109   15539 retry.go:31] will retry after 1.399358648s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-xczxs -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-xczxs -- mysql -ppassword -e "show databases;": exit status 1 (124.572915ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 17:04:57.596540   15539 retry.go:31] will retry after 942.419001ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-xczxs -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.85s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (12.78s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (12.784541487s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (12.78s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (12.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (12.277986569s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (12.28s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.38s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.38s)

TestFunctional/parallel/License (0.26s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.26s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestImageBuild/serial/Setup (13.9s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.897855053s)
--- PASS: TestImageBuild/serial/Setup (13.90s)

TestImageBuild/serial/NormalBuild (1.54s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.539462637s)
--- PASS: TestImageBuild/serial/NormalBuild (1.54s)

TestImageBuild/serial/BuildWithBuildArg (0.85s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.85s)

TestImageBuild/serial/BuildWithDockerIgnore (0.57s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.57s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.57s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.57s)

TestJSONOutput/start/Command (26.28s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (26.278403732s)
--- PASS: TestJSONOutput/start/Command (26.28s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.47s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.47s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.38s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.38s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.41s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.413445212s)
--- PASS: TestJSONOutput/stop/Command (10.41s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.086661ms)

-- stdout --
	{"specversion":"1.0","id":"18262cf4-b014-4f76-a28f-261d120e07f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"560da792-d8e6-4542-b459-91fedfc55f7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"fcc01d06-819f-4b78-91ff-b0b8026348db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c10c6c01-4994-4b51-b2fb-a523b6e237fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-8660/kubeconfig"}}
	{"specversion":"1.0","id":"5967c281-706f-4ce6-a959-f714c3f11abb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8660/.minikube"}}
	{"specversion":"1.0","id":"8ebe8d92-3112-4484-8ef6-1cc761afbc48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e2e3a67c-5766-48ee-93f9-efff10b376b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1f690cc7-ab0d-4346-a5cc-5b20b0e9bfa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.19s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (34.26s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.179897398s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (18.356005067s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.163962545s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.26s)

TestPause/serial/Start (29.05s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (29.054637034s)
--- PASS: TestPause/serial/Start (29.05s)

TestPause/serial/SecondStartNoReconfiguration (25.46s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (25.46344458s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (25.46s)

TestPause/serial/Pause (0.49s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.49s)

TestPause/serial/VerifyStatus (0.12s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (123.626806ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.12s)

TestPause/serial/Unpause (0.42s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.42s)

TestPause/serial/PauseAgain (0.53s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.53s)

TestPause/serial/DeletePaused (1.6s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.597072205s)
--- PASS: TestPause/serial/DeletePaused (1.60s)

TestPause/serial/VerifyDeletedResources (0.06s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

TestRunningBinaryUpgrade (67.17s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2452105481 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2452105481 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (28.966417845s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (34.678754572s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.003331157s)
--- PASS: TestRunningBinaryUpgrade (67.17s)

TestStoppedBinaryUpgrade/Setup (1.03s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.03s)

TestStoppedBinaryUpgrade/Upgrade (50.38s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3998475965 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3998475965 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.501877387s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3998475965 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3998475965 -p minikube stop: (23.621539487s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.259918751s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (50.38s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

TestKubernetesUpgrade (315.93s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (29.283865803s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.311371864s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (71.248168ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m17.398910938s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (66.446648ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-8660/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-8660/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (17.368669368s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.370204555s)
--- PASS: TestKubernetesUpgrade (315.93s)


Test skip (56/167)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
102 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
104 TestFunctional/parallel/SSHCmd 0
105 TestFunctional/parallel/CpCmd 0
107 TestFunctional/parallel/FileSync 0
108 TestFunctional/parallel/CertSync 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/ImageCommands 0
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0
125 TestGvisorAddon 0
126 TestMultiControlPlane 0
134 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
161 TestKicCustomNetwork 0
162 TestKicExistingNetwork 0
163 TestKicCustomSubnet 0
164 TestKicStaticIP 0
167 TestMountStart 0
168 TestMultiNode 0
169 TestNetworkPlugins 0
170 TestNoKubernetes 0
171 TestChangeNoneUser 0
182 TestPreload 0
183 TestScheduledStopWindows 0
184 TestScheduledStopUnix 0
185 TestSkaffold 0
188 TestStartStop/group/old-k8s-version 0.12
189 TestStartStop/group/newest-cni 0.12
190 TestStartStop/group/default-k8s-diff-port 0.12
191 TestStartStop/group/no-preload 0.13
192 TestStartStop/group/disable-driver-mounts 0.13
193 TestStartStop/group/embed-certs 0.12
194 TestInsufficientStorage 0
201 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.12s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.12s)

TestStartStop/group/newest-cni (0.12s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.12s)

TestStartStop/group/default-k8s-diff-port (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.12s)

TestStartStop/group/no-preload (0.13s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)
=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)
=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.12s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)