Test Report: none_Linux 19672

d6d2a37830b251a8a712eec07ee86a534797346d:2024-09-20:36302
Failed tests (1/167)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 71.85s   |
TestAddons/parallel/Registry (71.85s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.571808ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-h9zxc" [94a85633-fa9f-4487-8730-3b82acd43c17] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003529724s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lkpsj" [260577bf-b43b-4e23-97b2-02d10adfa092] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0038581s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.080867199s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/20 21:00:28 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
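
The failing step above is the in-cluster connectivity probe: both registry pods reported healthy, but the busybox pod could not reach the registry Service by its cluster DNS name within the one-minute timeout. A minimal way to re-run the same probe by hand, using the exact command from addons_test.go:343 (this assumes a running cluster and a kubectl context named "minikube"):

    kubectl --context minikube run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

A timeout here while the registry pods are healthy would typically implicate Service DNS or kube-proxy on the none driver rather than the registry itself; note that the test then falls back to probing the same registry through the node IP (the GET http://10.138.0.48:5000 above).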
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:37117               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:48 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 20:48 UTC | 20 Sep 24 20:48 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 20 Sep 24 20:48 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 20 Sep 24 20:48 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 20 Sep 24 20:48 UTC | 20 Sep 24 20:50 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 20 Sep 24 20:51 UTC | 20 Sep 24 20:51 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 20 Sep 24 21:00 UTC | 20 Sep 24 21:00 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
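
Reconstructed from the audit rows above, the multi-addon start that installed the registry corresponds to this single invocation (a sketch; same tree-built binary as in the table):

    out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 \
      --alsologtostderr --addons=registry --addons=metrics-server \
      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
      --addons=cloud-spanner --addons=inspektor-gadget \
      --addons=storage-provisioner-rancher --addons=nvidia-device-plugin \
      --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm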
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:48:54
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:48:54.731823   20180 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:48:54.732106   20180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:48:54.732118   20180 out.go:358] Setting ErrFile to fd 2...
	I0920 20:48:54.732125   20180 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:48:54.732329   20180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9477/.minikube/bin
	I0920 20:48:54.732918   20180 out.go:352] Setting JSON to false
	I0920 20:48:54.733832   20180 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1880,"bootTime":1726863455,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:48:54.733890   20180 start.go:139] virtualization: kvm guest
	I0920 20:48:54.736013   20180 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 20:48:54.737293   20180 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-9477/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 20:48:54.737340   20180 notify.go:220] Checking for updates...
	I0920 20:48:54.737362   20180 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 20:48:54.738763   20180 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:48:54.739911   20180 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9477/kubeconfig
	I0920 20:48:54.741291   20180 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9477/.minikube
	I0920 20:48:54.742634   20180 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 20:48:54.743981   20180 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 20:48:54.745418   20180 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:48:54.755901   20180 out.go:177] * Using the none driver based on user configuration
	I0920 20:48:54.766120   20180 start.go:297] selected driver: none
	I0920 20:48:54.766138   20180 start.go:901] validating driver "none" against <nil>
	I0920 20:48:54.766150   20180 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 20:48:54.766196   20180 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0920 20:48:54.766506   20180 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0920 20:48:54.767070   20180 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 20:48:54.767331   20180 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:48:54.767360   20180 cni.go:84] Creating CNI manager for ""
	I0920 20:48:54.767407   20180 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 20:48:54.767419   20180 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 20:48:54.767451   20180 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:48:54.768942   20180 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0920 20:48:54.770785   20180 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/config.json ...
	I0920 20:48:54.770814   20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/config.json: {Name:mkc9bc0ce17452b3786f4c22062e0f8d94946f73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:54.770943   20180 start.go:360] acquireMachinesLock for minikube: {Name:mkf9700fb566525b72391541d3ef90c9358e650d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 20:48:54.770980   20180 start.go:364] duration metric: took 21.858µs to acquireMachinesLock for "minikube"
	I0920 20:48:54.770998   20180 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 20:48:54.771060   20180 start.go:125] createHost starting for "" (driver="none")
	I0920 20:48:54.773431   20180 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0920 20:48:54.774675   20180 exec_runner.go:51] Run: systemctl --version
	I0920 20:48:54.777223   20180 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0920 20:48:54.777263   20180 client.go:168] LocalClient.Create starting
	I0920 20:48:54.777358   20180 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9477/.minikube/certs/ca.pem
	I0920 20:48:54.777396   20180 main.go:141] libmachine: Decoding PEM data...
	I0920 20:48:54.777417   20180 main.go:141] libmachine: Parsing certificate...
	I0920 20:48:54.777492   20180 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9477/.minikube/certs/cert.pem
	I0920 20:48:54.777520   20180 main.go:141] libmachine: Decoding PEM data...
	I0920 20:48:54.777539   20180 main.go:141] libmachine: Parsing certificate...
	I0920 20:48:54.777985   20180 client.go:171] duration metric: took 712.213µs to LocalClient.Create
	I0920 20:48:54.778014   20180 start.go:167] duration metric: took 802.314µs to libmachine.API.Create "minikube"
	I0920 20:48:54.778024   20180 start.go:293] postStartSetup for "minikube" (driver="none")
	I0920 20:48:54.778072   20180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 20:48:54.778130   20180 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 20:48:54.788460   20180 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 20:48:54.788480   20180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 20:48:54.788489   20180 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 20:48:54.790360   20180 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0920 20:48:54.791539   20180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9477/.minikube/addons for local assets ...
	I0920 20:48:54.791579   20180 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9477/.minikube/files for local assets ...
	I0920 20:48:54.791596   20180 start.go:296] duration metric: took 13.566765ms for postStartSetup
	I0920 20:48:54.792141   20180 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/config.json ...
	I0920 20:48:54.792260   20180 start.go:128] duration metric: took 21.192568ms to createHost
	I0920 20:48:54.792271   20180 start.go:83] releasing machines lock for "minikube", held for 21.280918ms
	I0920 20:48:54.792579   20180 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 20:48:54.792629   20180 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0920 20:48:54.794371   20180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 20:48:54.794420   20180 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 20:48:54.803347   20180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 20:48:54.803368   20180 start.go:495] detecting cgroup driver to use...
	I0920 20:48:54.803389   20180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 20:48:54.803467   20180 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 20:48:54.820192   20180 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 20:48:54.829977   20180 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 20:48:54.839637   20180 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 20:48:54.839690   20180 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 20:48:54.847655   20180 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 20:48:54.857029   20180 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 20:48:54.865412   20180 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 20:48:54.873410   20180 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 20:48:54.881151   20180 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 20:48:54.889916   20180 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 20:48:54.898951   20180 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 20:48:54.907514   20180 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 20:48:54.914661   20180 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 20:48:54.922542   20180 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 20:48:55.138161   20180 exec_runner.go:51] Run: sudo systemctl restart containerd
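
The sequence of sed edits above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver (SystemdCgroup = false, the runc v2 shim, conf_dir /etc/cni/net.d) before the restart. A quick check that the runtime actually picked the driver up, which is the same probe this log runs itself at 20:48:57 below:

    docker info --format '{{.CgroupDriver}}'    # expected output: cgroupfs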
	I0920 20:48:55.204931   20180 start.go:495] detecting cgroup driver to use...
	I0920 20:48:55.204985   20180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 20:48:55.205101   20180 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 20:48:55.224157   20180 exec_runner.go:51] Run: which cri-dockerd
	I0920 20:48:55.225048   20180 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 20:48:55.232688   20180 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0920 20:48:55.232711   20180 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0920 20:48:55.232740   20180 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0920 20:48:55.239830   20180 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 20:48:55.239956   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2618628913 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0920 20:48:55.247336   20180 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0920 20:48:55.465105   20180 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0920 20:48:55.686244   20180 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 20:48:55.686428   20180 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0920 20:48:55.686445   20180 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0920 20:48:55.686491   20180 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0920 20:48:55.695894   20180 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0920 20:48:55.696040   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube668087537 /etc/docker/daemon.json
	I0920 20:48:55.704282   20180 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 20:48:55.917476   20180 exec_runner.go:51] Run: sudo systemctl restart docker
	I0920 20:48:56.212394   20180 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 20:48:56.222755   20180 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0920 20:48:56.237285   20180 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 20:48:56.247742   20180 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0920 20:48:56.461712   20180 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0920 20:48:56.671792   20180 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 20:48:56.890919   20180 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0920 20:48:56.904298   20180 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 20:48:56.914810   20180 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 20:48:57.150948   20180 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0920 20:48:57.216627   20180 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 20:48:57.216707   20180 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0920 20:48:57.218029   20180 start.go:563] Will wait 60s for crictl version
	I0920 20:48:57.218064   20180 exec_runner.go:51] Run: which crictl
	I0920 20:48:57.218886   20180 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0920 20:48:57.247068   20180 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0920 20:48:57.247136   20180 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0920 20:48:57.267667   20180 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0920 20:48:57.290063   20180 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0920 20:48:57.290150   20180 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0920 20:48:57.292949   20180 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0920 20:48:57.294149   20180 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 20:48:57.294258   20180 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 20:48:57.294270   20180 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0920 20:48:57.294358   20180 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
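
The [Service] block above is the kubelet systemd override that minikube writes as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down (20:48:57). A hedged way to inspect the effective unit on the host afterwards, using standard systemd tooling:

    systemctl cat kubelet     # shows /lib/systemd/system/kubelet.service plus its drop-ins
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf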
	I0920 20:48:57.294407   20180 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0920 20:48:57.340662   20180 cni.go:84] Creating CNI manager for ""
	I0920 20:48:57.340687   20180 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 20:48:57.340702   20180 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 20:48:57.340722   20180 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 20:48:57.340886   20180 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
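This generated config is staged as /var/tmp/minikube/kubeadm.yaml.new below and handed to kubeadm init at 20:48:58. As a sketch, such a file can be checked by hand with kubeadm's own tooling (the binary path is the one this run installs; the migrate invocation is the one the deprecation warnings further down recommend; /tmp/kubeadm-new.yaml is a placeholder output path):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-new.yaml    # placeholder output path
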
	I0920 20:48:57.340955   20180 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 20:48:57.349169   20180 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 20:48:57.349216   20180 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 20:48:57.358252   20180 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 20:48:57.358254   20180 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 20:48:57.358285   20180 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 20:48:57.358321   20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 20:48:57.358347   20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 20:48:57.358289   20180 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0920 20:48:57.370816   20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0920 20:48:57.405134   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2135074387 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 20:48:57.408030   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1924790027 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 20:48:57.435132   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube114419049 /var/lib/minikube/binaries/v1.31.1/kubelet
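
Each binary above is fetched with a checksum=file: pairing against the published .sha256 file. The equivalent manual verification, as a sketch using the same URLs the log cites:

    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check    # two spaces: the .sha256 file holds only the hash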
	I0920 20:48:57.499241   20180 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 20:48:57.507525   20180 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0920 20:48:57.507546   20180 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0920 20:48:57.507579   20180 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0920 20:48:57.516376   20180 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0920 20:48:57.516505   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2305222324 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0920 20:48:57.524120   20180 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0920 20:48:57.524137   20180 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0920 20:48:57.524167   20180 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0920 20:48:57.531313   20180 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 20:48:57.531432   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube422082734 /lib/systemd/system/kubelet.service
	I0920 20:48:57.538646   20180 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0920 20:48:57.538739   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3742626517 /var/tmp/minikube/kubeadm.yaml.new
	I0920 20:48:57.545952   20180 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0920 20:48:57.547147   20180 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 20:48:57.762774   20180 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0920 20:48:57.776587   20180 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube for IP: 10.138.0.48
	I0920 20:48:57.776611   20180 certs.go:194] generating shared ca certs ...
	I0920 20:48:57.776628   20180 certs.go:226] acquiring lock for ca certs: {Name:mk1d6196dbc1689b3628478a0c39c96ca2cfb8dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:57.776755   20180 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9477/.minikube/ca.key
	I0920 20:48:57.776794   20180 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9477/.minikube/proxy-client-ca.key
	I0920 20:48:57.776803   20180 certs.go:256] generating profile certs ...
	I0920 20:48:57.776854   20180 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/client.key
	I0920 20:48:57.776867   20180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/client.crt with IP's: []
	I0920 20:48:57.923963   20180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/client.crt ...
	I0920 20:48:57.923991   20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/client.crt: {Name:mk88860d394f74c51eb6ce8b308d957fce763fee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:57.924143   20180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/client.key ...
	I0920 20:48:57.924155   20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/client.key: {Name:mk70b474052d33ba900a8a63ae147fa88926b935 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:57.924236   20180 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0920 20:48:57.924252   20180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0920 20:48:58.073293   20180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0920 20:48:58.073322   20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk39f9363908008685c9b4b09227e07812e5fb7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:58.073465   20180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0920 20:48:58.073479   20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk088cecaa082d954c89f523dd8f0cee0ee4e606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:58.073554   20180 certs.go:381] copying /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.crt
	I0920 20:48:58.073652   20180 certs.go:385] copying /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.key
	I0920 20:48:58.073707   20180 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.key
	I0920 20:48:58.073720   20180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0920 20:48:58.169907   20180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.crt ...
	I0920 20:48:58.169936   20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.crt: {Name:mk38f82ddc9a7f07e6396525e685b8dd38ecef11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:58.170076   20180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.key ...
	I0920 20:48:58.170090   20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.key: {Name:mk031c3fe36dff123c932b5e7c780e82e1def28a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:58.170251   20180 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9477/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 20:48:58.170283   20180 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9477/.minikube/certs/ca.pem (1078 bytes)
	I0920 20:48:58.170306   20180 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9477/.minikube/certs/cert.pem (1123 bytes)
	I0920 20:48:58.170329   20180 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9477/.minikube/certs/key.pem (1675 bytes)
	I0920 20:48:58.170899   20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 20:48:58.171014   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1713794039 /var/lib/minikube/certs/ca.crt
	I0920 20:48:58.179515   20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 20:48:58.179617   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4016040475 /var/lib/minikube/certs/ca.key
	I0920 20:48:58.187092   20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 20:48:58.187207   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3960471729 /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 20:48:58.194354   20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 20:48:58.194448   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3030144724 /var/lib/minikube/certs/proxy-client-ca.key
	I0920 20:48:58.201715   20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0920 20:48:58.201817   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3007035077 /var/lib/minikube/certs/apiserver.crt
	I0920 20:48:58.209289   20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 20:48:58.209389   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube162044716 /var/lib/minikube/certs/apiserver.key
	I0920 20:48:58.216987   20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 20:48:58.217083   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2325120447 /var/lib/minikube/certs/proxy-client.crt
	I0920 20:48:58.224472   20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 20:48:58.224597   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2687605920 /var/lib/minikube/certs/proxy-client.key
	I0920 20:48:58.232141   20180 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0920 20:48:58.232157   20180 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:58.232185   20180 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:58.239289   20180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9477/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 20:48:58.239416   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube433193939 /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:58.246560   20180 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 20:48:58.246662   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2452136042 /var/lib/minikube/kubeconfig
	I0920 20:48:58.254606   20180 exec_runner.go:51] Run: openssl version
	I0920 20:48:58.257178   20180 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 20:48:58.264902   20180 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:58.266234   20180 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:58.266267   20180 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:58.268867   20180 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
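
The link name b5213941.0 above is not arbitrary: it is the OpenSSL subject hash of the CA plus a .0 suffix, which is exactly what the preceding openssl x509 -hash run computes. To confirm by hand against the same file:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941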
	I0920 20:48:58.276058   20180 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 20:48:58.277096   20180 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 20:48:58.277142   20180 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:48:58.277254   20180 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 20:48:58.291940   20180 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 20:48:58.299534   20180 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 20:48:58.306888   20180 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0920 20:48:58.326413   20180 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 20:48:58.334289   20180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 20:48:58.334307   20180 kubeadm.go:157] found existing configuration files:
	
	I0920 20:48:58.334341   20180 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 20:48:58.341740   20180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 20:48:58.341780   20180 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 20:48:58.348835   20180 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 20:48:58.357142   20180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 20:48:58.357180   20180 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 20:48:58.363861   20180 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 20:48:58.413600   20180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 20:48:58.413693   20180 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 20:48:58.421260   20180 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 20:48:58.428722   20180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 20:48:58.428766   20180 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
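The cleanup above follows one pattern per kubeconfig under /etc/kubernetes: grep the file for the expected control-plane endpoint, and if the endpoint (or the whole file) is missing, remove the file so kubeadm can regenerate it. A minimal Go sketch of the same flow, using os/exec in place of minikube's exec_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		// grep exits non-zero when the endpoint (or the file itself)
    		// is missing; the stale config is then removed.
    		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
    			fmt.Printf("%s may not be in %s - removing\n", endpoint, conf)
    			_ = exec.Command("sudo", "rm", "-f", conf).Run()
    		}
    	}
    }

Here all four greps exit with status 2 because none of the files exist yet, so the rm calls are effectively no-ops before a clean kubeadm init.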
	I0920 20:48:58.435748   20180 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 20:48:58.465965   20180 kubeadm.go:310] W0920 20:48:58.465858   21056 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 20:48:58.466421   20180 kubeadm.go:310] W0920 20:48:58.466370   21056 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 20:48:58.467888   20180 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 20:48:58.467938   20180 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 20:48:58.560728   20180 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 20:48:58.560834   20180 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 20:48:58.560847   20180 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 20:48:58.560854   20180 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 20:48:58.570530   20180 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 20:48:58.573372   20180 out.go:235]   - Generating certificates and keys ...
	I0920 20:48:58.573412   20180 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 20:48:58.573426   20180 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 20:48:58.698470   20180 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 20:48:59.055617   20180 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 20:48:59.200841   20180 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 20:48:59.317020   20180 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 20:48:59.471007   20180 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 20:48:59.471166   20180 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0920 20:48:59.614666   20180 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 20:48:59.614792   20180 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0920 20:48:59.873414   20180 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 20:49:00.003158   20180 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 20:49:00.166154   20180 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 20:49:00.166294   20180 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 20:49:00.398511   20180 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 20:49:00.782639   20180 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 20:49:00.958242   20180 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 20:49:01.138387   20180 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 20:49:01.256933   20180 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 20:49:01.258059   20180 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 20:49:01.260233   20180 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 20:49:01.262332   20180 out.go:235]   - Booting up control plane ...
	I0920 20:49:01.262352   20180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 20:49:01.262368   20180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 20:49:01.262885   20180 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 20:49:01.283211   20180 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 20:49:01.287552   20180 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 20:49:01.287583   20180 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 20:49:01.531980   20180 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 20:49:01.532006   20180 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 20:49:02.033468   20180 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.485875ms
	I0920 20:49:02.033493   20180 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 20:49:06.535448   20180 kubeadm.go:310] [api-check] The API server is healthy after 4.501949606s
	I0920 20:49:06.545969   20180 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 20:49:06.557760   20180 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 20:49:06.574267   20180 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 20:49:06.574296   20180 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 20:49:06.582313   20180 kubeadm.go:310] [bootstrap-token] Using token: 3685jq.dvyml113fme7q15o
	I0920 20:49:06.583713   20180 out.go:235]   - Configuring RBAC rules ...
	I0920 20:49:06.583745   20180 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 20:49:06.587423   20180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 20:49:06.592723   20180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 20:49:06.595220   20180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 20:49:06.597448   20180 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 20:49:06.599678   20180 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 20:49:06.941105   20180 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 20:49:07.360974   20180 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 20:49:07.941239   20180 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 20:49:07.942150   20180 kubeadm.go:310] 
	I0920 20:49:07.942170   20180 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 20:49:07.942175   20180 kubeadm.go:310] 
	I0920 20:49:07.942180   20180 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 20:49:07.942184   20180 kubeadm.go:310] 
	I0920 20:49:07.942189   20180 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 20:49:07.942193   20180 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 20:49:07.942196   20180 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 20:49:07.942200   20180 kubeadm.go:310] 
	I0920 20:49:07.942203   20180 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 20:49:07.942221   20180 kubeadm.go:310] 
	I0920 20:49:07.942227   20180 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 20:49:07.942231   20180 kubeadm.go:310] 
	I0920 20:49:07.942235   20180 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 20:49:07.942239   20180 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 20:49:07.942244   20180 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 20:49:07.942259   20180 kubeadm.go:310] 
	I0920 20:49:07.942267   20180 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 20:49:07.942271   20180 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 20:49:07.942275   20180 kubeadm.go:310] 
	I0920 20:49:07.942279   20180 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3685jq.dvyml113fme7q15o \
	I0920 20:49:07.942282   20180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fb1381ab3e8d15f0a6b8994a90c93d97d2e6ae809c49b3ec6993e5295be6567a \
	I0920 20:49:07.942285   20180 kubeadm.go:310] 	--control-plane 
	I0920 20:49:07.942288   20180 kubeadm.go:310] 
	I0920 20:49:07.942291   20180 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 20:49:07.942294   20180 kubeadm.go:310] 
	I0920 20:49:07.942296   20180 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3685jq.dvyml113fme7q15o \
	I0920 20:49:07.942299   20180 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fb1381ab3e8d15f0a6b8994a90c93d97d2e6ae809c49b3ec6993e5295be6567a 
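Two of the kubeadm phases above are timed waits: [kubelet-check] polls the kubelet's local healthz endpoint and [api-check] polls the API server, each capped at 4m0s (the kubelet reported healthy after ~500ms, the apiserver after ~4.5s). A rough Go equivalent of such a wait loop; the fixed 500ms poll interval is an assumption, kubeadm's actual backoff differs:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns 200 OK or the timeout elapses.
    func waitHealthy(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := http.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }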
	I0920 20:49:07.945099   20180 cni.go:84] Creating CNI manager for ""
	I0920 20:49:07.945120   20180 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 20:49:07.947027   20180 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 20:49:07.948344   20180 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0920 20:49:07.958317   20180 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 20:49:07.958437   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2473423273 /etc/cni/net.d/1-k8s.conflist
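The 496-byte conflist copied into /etc/cni/net.d is not dumped in the log; a bridge configuration of the general shape minikube generates looks roughly like the following (illustrative only, not the exact file content, with the conventional 10.244.0.0/16 pod subnet assumed):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "forceAddress": false,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }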
	I0920 20:49:07.968796   20180 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 20:49:07.968856   20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:49:07.968919   20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_20T20_49_07_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0920 20:49:07.977847   20180 ops.go:34] apiserver oom_adj: -16
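The ops.go line confirms the API server runs with oom_adj -16, i.e. it is shielded from the kernel OOM killer. The probe is just procfs, as the bash command above shows; reproduced as a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// same probe as above: resolve the apiserver PID, read its oom_adj
    	out, err := exec.Command("/bin/bash", "-c",
    		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
    	if err != nil {
    		fmt.Println("apiserver not running:", err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", out) // -16 here
    }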
	I0920 20:49:08.036376   20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:49:08.536914   20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:49:09.036469   20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:49:09.537460   20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:49:10.036923   20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:49:10.537385   20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:49:11.037234   20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:49:11.537386   20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:49:12.036828   20180 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:49:12.097927   20180 kubeadm.go:1113] duration metric: took 4.129125261s to wait for elevateKubeSystemPrivileges
	I0920 20:49:12.097961   20180 kubeadm.go:394] duration metric: took 13.820824797s to StartCluster
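The burst of `kubectl get sa default` calls above is a readiness gate: the default ServiceAccount only appears once the controller manager's token and service-account controllers are running, so minikube retries on a ~500ms cadence (4.129s in total here) before considering the minikube-rbac binding created at 20:49:07 effective. A sketch of that loop, with the kubectl and kubeconfig paths taken from the log:

    package main

    import (
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
    	for {
    		err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			break // the ServiceAccount now exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }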
	I0920 20:49:12.097980   20180 settings.go:142] acquiring lock: {Name:mkffd6871e00198385cdf47f230b5743b288e4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:49:12.098055   20180 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9477/kubeconfig
	I0920 20:49:12.098678   20180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9477/kubeconfig: {Name:mk42d63689d61c382c93256ce59e3b499a97143c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:49:12.098911   20180 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
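A note on readability: the `Setting addon ...` / `Checking if "minikube" exists` / `Checking apiserver status ...` lines that follow are heavily interleaved because each addon marked true in the map above is brought up on its own goroutine. Schematically, under illustrative names that are not minikube's actual internals:

    package main

    import (
    	"fmt"
    	"sync"
    )

    func enableAddon(name string) {
    	// stands in for the real work: host check, apiserver health
    	// check, then staging and applying the addon's manifests
    	fmt.Println("Setting addon", name, "=true")
    }

    func main() {
    	toEnable := map[string]bool{"registry": true, "metrics-server": true, "yakd": true}
    	var wg sync.WaitGroup
    	for name, enabled := range toEnable {
    		if !enabled {
    			continue
    		}
    		wg.Add(1)
    		go func(name string) {
    			defer wg.Done()
    			enableAddon(name)
    		}(name)
    	}
    	wg.Wait()
    }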
	I0920 20:49:12.098900   20180 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 20:49:12.099039   20180 addons.go:69] Setting yakd=true in profile "minikube"
	I0920 20:49:12.099059   20180 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0920 20:49:12.099066   20180 addons.go:234] Setting addon yakd=true in "minikube"
	I0920 20:49:12.099057   20180 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0920 20:49:12.099072   20180 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0920 20:49:12.099079   20180 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0920 20:49:12.099094   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.099106   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.099107   20180 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 20:49:12.099112   20180 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0920 20:49:12.099126   20180 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0920 20:49:12.099153   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.099160   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.099165   20180 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0920 20:49:12.099176   20180 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0920 20:49:12.099180   20180 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0920 20:49:12.099192   20180 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0920 20:49:12.099200   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.099262   20180 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0920 20:49:12.099276   20180 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0920 20:49:12.099299   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.099717   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.099735   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.099771   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.099778   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.099788   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.099041   20180 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0920 20:49:12.099801   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.099811   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.099827   20180 addons.go:69] Setting registry=true in profile "minikube"
	I0920 20:49:12.099792   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.099837   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.099840   20180 addons.go:234] Setting addon registry=true in "minikube"
	I0920 20:49:12.099858   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.099859   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.099813   20180 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0920 20:49:12.099049   20180 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0920 20:49:12.099872   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.099067   20180 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0920 20:49:12.099159   20180 addons.go:69] Setting volcano=true in profile "minikube"
	I0920 20:49:12.099888   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.099895   20180 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0920 20:49:12.099899   20180 addons.go:234] Setting addon volcano=true in "minikube"
	I0920 20:49:12.099884   20180 mustload.go:65] Loading cluster: minikube
	I0920 20:49:12.099913   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.099919   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.099829   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.100072   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.100079   20180 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 20:49:12.099897   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.099899   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.100474   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.100495   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.100506   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.100524   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.100531   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.100539   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.099053   20180 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0920 20:49:12.100617   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.100722   20180 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0920 20:49:12.101820   20180 out.go:177] * Configuring local host environment ...
	I0920 20:49:12.099889   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.102587   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.102711   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.102712   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.102727   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.102749   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.102766   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.099867   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.103261   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.102862   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.103337   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.103374   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.099864   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 20:49:12.104569   20180 out.go:270] * 
	W0920 20:49:12.104619   20180 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0920 20:49:12.104628   20180 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0920 20:49:12.104634   20180 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0920 20:49:12.104649   20180 out.go:270] * 
	W0920 20:49:12.104696   20180 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0920 20:49:12.104708   20180 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0920 20:49:12.104716   20180 out.go:270] * 
	W0920 20:49:12.104742   20180 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0920 20:49:12.104753   20180 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0920 20:49:12.104759   20180 out.go:270] * 
	W0920 20:49:12.104765   20180 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0920 20:49:12.104791   20180 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 20:49:12.099863   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.106006   20180 out.go:177] * Verifying Kubernetes components...
	I0920 20:49:12.106220   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.106238   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.106296   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.107574   20180 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 20:49:12.121564   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.121695   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.125190   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.126301   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.126627   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.129767   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.130768   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.132044   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.136645   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.138819   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.142974   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.143373   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.143456   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.143507   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.144748   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.144838   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.146715   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.147972   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.148020   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.157887   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.157942   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.158307   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.158352   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.158703   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.158801   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.160003   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.160067   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.160697   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.160746   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.161661   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.161707   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.162020   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.162047   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.165685   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.165737   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.166128   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.166304   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.167414   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
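Each of these apiserver status checks is the same three-step probe, repeated once per addon goroutine: find the apiserver's freezer cgroup from /proc/<pid>/cgroup, confirm freezer.state is THAWED (i.e. the cluster is not paused), then hit /healthz over HTTPS. A condensed sketch, assuming cgroup v1 as on this host and skipping certificate verification for brevity (minikube itself verifies against its own CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"os"
    	"strings"
    )

    func main() {
    	pid := "21492" // from: pgrep -xnf kube-apiserver.*minikube.*
    	data, _ := os.ReadFile("/proc/" + pid + "/cgroup")
    	var path string
    	for _, line := range strings.Split(string(data), "\n") {
    		// lines look like "13:freezer:/kubepods/burstable/pod.../..."
    		if parts := strings.SplitN(line, ":", 3); len(parts) == 3 && parts[1] == "freezer" {
    			path = parts[2]
    		}
    	}
    	state, _ := os.ReadFile("/sys/fs/cgroup/freezer" + path + "/freezer.state")
    	fmt.Printf("freezer state: %s", state) // THAWED means not paused

    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	if resp, err := client.Get("https://10.138.0.48:8443/healthz"); err == nil {
    		fmt.Println("healthz:", resp.Status) // "200 OK" here
    		resp.Body.Close()
    	}
    }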
	I0920 20:49:12.169259   20180 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 20:49:12.170553   20180 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 20:49:12.170582   20180 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 20:49:12.170720   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1302454244 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 20:49:12.176118   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.176142   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.179131   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.179155   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.179509   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.179528   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.180235   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.180280   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.184032   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.184053   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.184133   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.184151   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.184460   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.185207   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.185224   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.187057   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.187924   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.187943   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.188721   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.188886   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.189089   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.189711   20180 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 20:49:12.189748   20180 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 20:49:12.190730   20180 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 20:49:12.191710   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.193237   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.192010   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.192452   20180 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 20:49:12.192511   20180 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 20:49:12.192769   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.193779   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.194508   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.197341   20180 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 20:49:12.197432   20180 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:49:12.197450   20180 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0920 20:49:12.197458   20180 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:49:12.197496   20180 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
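The found/removing/rm sequence above is exec_runner's install-or-replace behavior: a manifest already present from a previous run is deleted before the fresh copy is staged, keeping addon installs idempotent. A small sketch of the pattern (src/dst mirror the paths in the surrounding log):

    package main

    import (
    	"os"
    	"os/exec"
    )

    // installManifest removes any existing copy of dst, then stages src there.
    func installManifest(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		// found existing file, removing ...
    		if err := exec.Command("sudo", "rm", "-f", dst).Run(); err != nil {
    			return err
    		}
    	}
    	return exec.Command("sudo", "cp", "-a", src, dst).Run()
    }

    func main() {
    	_ = installManifest("/tmp/minikube4100326101",
    		"/etc/kubernetes/addons/storage-provisioner.yaml")
    }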
	I0920 20:49:12.197554   20180 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 20:49:12.197582   20180 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 20:49:12.197609   20180 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:49:12.197684   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 20:49:12.197738   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2001933503 /etc/kubernetes/addons/ig-namespace.yaml
	I0920 20:49:12.197805   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube651718609 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:49:12.198100   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.194401   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.198572   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.198671   20180 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 20:49:12.198697   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 20:49:12.198822   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube260274205 /etc/kubernetes/addons/registry-rc.yaml
	I0920 20:49:12.199005   20180 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 20:49:12.199054   20180 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0920 20:49:12.199066   20180 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 20:49:12.199087   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.199120   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.199701   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.199715   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.199715   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.199728   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.199744   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.200332   20180 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 20:49:12.200424   20180 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 20:49:12.200565   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2216114389 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 20:49:12.201418   20180 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 20:49:12.201438   20180 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 20:49:12.201504   20180 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 20:49:12.201536   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube999839601 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 20:49:12.203977   20180 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 20:49:12.204014   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 20:49:12.204772   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2109802716 /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 20:49:12.204252   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.205274   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.205779   20180 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0920 20:49:12.205821   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:12.206519   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:12.206537   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:12.206570   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:12.206585   20180 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 20:49:12.207755   20180 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 20:49:12.207880   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.207910   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.215809   20180 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 20:49:12.215833   20180 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 20:49:12.215941   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3509991536 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 20:49:12.216958   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.222408   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.222415   20180 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 20:49:12.222472   20180 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 20:49:12.226511   20180 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 20:49:12.226676   20180 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 20:49:12.226710   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 20:49:12.226847   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2942876505 /etc/kubernetes/addons/deployment.yaml
	I0920 20:49:12.227043   20180 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 20:49:12.227077   20180 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 20:49:12.228009   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2548288831 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 20:49:12.234242   20180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 20:49:12.234266   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 20:49:12.234418   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2274237374 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 20:49:12.240569   20180 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 20:49:12.246071   20180 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 20:49:12.246100   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:49:12.246104   20180 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 20:49:12.246225   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2483380018 /etc/kubernetes/addons/ig-role.yaml
	I0920 20:49:12.247487   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 20:49:12.247775   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4100326101 /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:49:12.248014   20180 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 20:49:12.248244   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.248293   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.248404   20180 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 20:49:12.248422   20180 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 20:49:12.248525   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4293824614 /etc/kubernetes/addons/registry-svc.yaml
	I0920 20:49:12.248648   20180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 20:49:12.248665   20180 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 20:49:12.249432   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4000255514 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 20:49:12.250098   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 20:49:12.250841   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.250861   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.251219   20180 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 20:49:12.253015   20180 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 20:49:12.254038   20180 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 20:49:12.254066   20180 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 20:49:12.254182   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1677312610 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 20:49:12.255607   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.256763   20180 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 20:49:12.257961   20180 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 20:49:12.257990   20180 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 20:49:12.258103   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1538150707 /etc/kubernetes/addons/yakd-ns.yaml
	I0920 20:49:12.262679   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 20:49:12.263870   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:12.268234   20180 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
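The sed pipeline above rewrites the coredns ConfigMap in place: it injects a hosts block ahead of the forward plugin so host.minikube.internal resolves to the host (127.0.0.1 here, since with the none driver the host is the node), and enables the log plugin ahead of errors. Reconstructed from the sed expressions, the patched Corefile stanza reads as follows (surrounding default plugins elided):

    .:53 {
        log
        errors
        ...
        hosts {
           127.0.0.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }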
	I0920 20:49:12.273116   20180 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:49:12.273173   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 20:49:12.273297   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1322574623 /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:49:12.273126   20180 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 20:49:12.273452   20180 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 20:49:12.273856   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2563382683 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 20:49:12.277361   20180 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 20:49:12.277383   20180 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 20:49:12.277498   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3876785540 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 20:49:12.277873   20180 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 20:49:12.277899   20180 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 20:49:12.278005   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4282540638 /etc/kubernetes/addons/yakd-sa.yaml
	I0920 20:49:12.278928   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:49:12.280021   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.280048   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.284158   20180 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 20:49:12.284187   20180 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 20:49:12.293523   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.294441   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:49:12.294786   20180 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:49:12.294812   20180 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 20:49:12.294938   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1650690386 /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:49:12.298233   20180 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 20:49:12.299601   20180 out.go:177]   - Using image docker.io/busybox:stable
	I0920 20:49:12.300899   20180 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:49:12.300932   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 20:49:12.301060   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube637399561 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:49:12.302414   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2761356044 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 20:49:12.309076   20180 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 20:49:12.309062   20180 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 20:49:12.309117   20180 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 20:49:12.309121   20180 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 20:49:12.309241   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1567994081 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 20:49:12.309254   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1095153991 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 20:49:12.318900   20180 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 20:49:12.318932   20180 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 20:49:12.319756   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1833948798 /etc/kubernetes/addons/yakd-crb.yaml
	I0920 20:49:12.323963   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:49:12.326747   20180 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:49:12.326781   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 20:49:12.326983   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2697256451 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:49:12.330513   20180 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 20:49:12.330540   20180 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 20:49:12.330688   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube217740776 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 20:49:12.331647   20180 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 20:49:12.331676   20180 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 20:49:12.331807   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2046182493 /etc/kubernetes/addons/yakd-svc.yaml
	I0920 20:49:12.334048   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:12.334108   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:12.342080   20180 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 20:49:12.342109   20180 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 20:49:12.342229   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube614902354 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 20:49:12.347304   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:12.347331   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:12.349550   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:49:12.352513   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:12.352557   20180 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 20:49:12.352570   20180 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0920 20:49:12.352577   20180 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0920 20:49:12.352615   20180 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0920 20:49:12.369524   20180 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 20:49:12.369558   20180 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 20:49:12.369702   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube847297876 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 20:49:12.372596   20180 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 20:49:12.372624   20180 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 20:49:12.372830   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3411551087 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 20:49:12.388771   20180 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 20:49:12.388807   20180 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 20:49:12.388936   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3895675179 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 20:49:12.390019   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:49:12.414694   20180 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 20:49:12.414872   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3598296078 /etc/kubernetes/addons/storageclass.yaml
	I0920 20:49:12.415033   20180 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:49:12.415059   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 20:49:12.415238   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2778301694 /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:49:12.430877   20180 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 20:49:12.430922   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 20:49:12.431053   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1745240354 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 20:49:12.436658   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 20:49:12.454128   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:49:12.476866   20180 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 20:49:12.476910   20180 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 20:49:12.477056   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2050546734 /etc/kubernetes/addons/ig-crd.yaml
	I0920 20:49:12.489718   20180 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:49:12.489752   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 20:49:12.489900   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube393167977 /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:49:12.548639   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:49:12.551582   20180 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 20:49:12.551619   20180 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 20:49:12.551745   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1066337254 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 20:49:12.565087   20180 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0920 20:49:12.607605   20180 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 20:49:12.607649   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 20:49:12.607792   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3528681884 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 20:49:12.670144   20180 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0920 20:49:12.674496   20180 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0920 20:49:12.674520   20180 node_ready.go:38] duration metric: took 4.343732ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0920 20:49:12.674530   20180 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:49:12.683673   20180 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-57rnw" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:12.732429   20180 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 20:49:12.732461   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 20:49:12.732589   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1828190478 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 20:49:12.805333   20180 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:49:12.805373   20180 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 20:49:12.805517   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1273143737 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:49:12.819406   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:49:13.019338   20180 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0920 20:49:13.072063   20180 addons.go:475] Verifying addon registry=true in "minikube"
	I0920 20:49:13.077935   20180 out.go:177] * Verifying registry addon...
	I0920 20:49:13.080656   20180 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 20:49:13.084338   20180 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 20:49:13.084358   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:13.518923   20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.064727035s)
	I0920 20:49:13.521474   20180 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0920 20:49:13.531215   20180 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0920 20:49:13.557133   20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.27815973s)
	I0920 20:49:13.561720   20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.237698784s)
	I0920 20:49:13.561753   20180 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0920 20:49:13.594197   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:13.610964   20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.062265934s)
	I0920 20:49:13.712924   20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.363330885s)
	I0920 20:49:14.091404   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:14.136946   20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.746871797s)
	W0920 20:49:14.136986   20180 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 20:49:14.137012   20180 retry.go:31] will retry after 255.414229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 20:49:14.392716   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:49:14.584659   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:14.690327   20180 pod_ready.go:103] pod "coredns-7c65d6cfc9-57rnw" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:15.085077   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:15.209847   20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (2.959700016s)
	I0920 20:49:15.548450   20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.728961066s)
	I0920 20:49:15.548568   20180 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0920 20:49:15.550458   20180 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 20:49:15.555275   20180 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 20:49:15.561596   20180 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 20:49:15.561624   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:15.599547   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:15.599945   20180 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.207182826s)
	I0920 20:49:16.060948   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:16.085165   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:16.560715   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:16.584823   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:17.059666   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:17.084960   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:17.190905   20180 pod_ready.go:103] pod "coredns-7c65d6cfc9-57rnw" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:17.560077   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:17.584168   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:18.060813   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:18.084907   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:18.560581   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:18.584830   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.060497   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.084109   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.206270   20180 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 20:49:19.206442   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4007279464 /var/lib/minikube/google_application_credentials.json
	I0920 20:49:19.207428   20180 pod_ready.go:103] pod "coredns-7c65d6cfc9-57rnw" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:19.216950   20180 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 20:49:19.217066   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3826963202 /var/lib/minikube/google_cloud_project
	I0920 20:49:19.226535   20180 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0920 20:49:19.226581   20180 host.go:66] Checking if "minikube" exists ...
	I0920 20:49:19.227122   20180 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 20:49:19.227140   20180 api_server.go:166] Checking apiserver status ...
	I0920 20:49:19.227168   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:19.244046   20180 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/21492/cgroup
	I0920 20:49:19.254043   20180 api_server.go:182] apiserver freezer: "13:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae"
	I0920 20:49:19.254095   20180 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/47bfaead87737dda5aa8a33086e0010db14f9c2eb2329cdffe245227ee40aaae/freezer.state
	I0920 20:49:19.262672   20180 api_server.go:204] freezer state: "THAWED"
	I0920 20:49:19.262699   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:19.267467   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:19.267524   20180 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 20:49:19.372651   20180 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:49:19.414633   20180 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 20:49:19.476975   20180 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 20:49:19.477030   20180 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 20:49:19.477181   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1103847264 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 20:49:19.488862   20180 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 20:49:19.488906   20180 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 20:49:19.489009   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube412409584 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 20:49:19.497585   20180 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:49:19.497613   20180 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 20:49:19.497751   20180 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2712862313 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:49:19.506030   20180 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:49:19.562032   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.584064   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:20.034225   20180 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0920 20:49:20.035717   20180 out.go:177] * Verifying gcp-auth addon...
	I0920 20:49:20.037596   20180 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 20:49:20.039885   20180 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 20:49:20.141938   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:20.142511   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.190706   20180 pod_ready.go:93] pod "coredns-7c65d6cfc9-57rnw" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:20.190726   20180 pod_ready.go:82] duration metric: took 7.506965168s for pod "coredns-7c65d6cfc9-57rnw" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:20.190735   20180 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qgklq" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:20.194284   20180 pod_ready.go:93] pod "coredns-7c65d6cfc9-qgklq" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:20.194299   20180 pod_ready.go:82] duration metric: took 3.558748ms for pod "coredns-7c65d6cfc9-qgklq" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:20.194310   20180 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:20.197746   20180 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:20.197764   20180 pod_ready.go:82] duration metric: took 3.446689ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:20.197772   20180 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:20.562909   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.584261   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:21.059873   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:21.141452   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:21.202875   20180 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:21.202896   20180 pod_ready.go:82] duration metric: took 1.005117492s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:21.202905   20180 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:21.207001   20180 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:21.207024   20180 pod_ready.go:82] duration metric: took 4.111378ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:21.207033   20180 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-75wt4" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:21.387941   20180 pod_ready.go:93] pod "kube-proxy-75wt4" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:21.387962   20180 pod_ready.go:82] duration metric: took 180.923463ms for pod "kube-proxy-75wt4" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:21.387972   20180 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:21.558946   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:21.583689   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:21.787571   20180 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:21.787607   20180 pod_ready.go:82] duration metric: took 399.628497ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:21.787618   20180 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9ml89" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:22.059764   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.084434   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:22.187919   20180 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9ml89" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:22.187945   20180 pod_ready.go:82] duration metric: took 400.319835ms for pod "nvidia-device-plugin-daemonset-9ml89" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:22.187954   20180 pod_ready.go:39] duration metric: took 9.513412698s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:49:22.187975   20180 api_server.go:52] waiting for apiserver process to appear ...
	I0920 20:49:22.188089   20180 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:22.205820   20180 api_server.go:72] duration metric: took 10.100993289s to wait for apiserver process to appear ...
	I0920 20:49:22.205844   20180 api_server.go:88] waiting for apiserver healthz status ...
	I0920 20:49:22.205862   20180 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 20:49:22.210968   20180 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 20:49:22.211783   20180 api_server.go:141] control plane version: v1.31.1
	I0920 20:49:22.211807   20180 api_server.go:131] duration metric: took 5.95658ms to wait for apiserver health ...
	I0920 20:49:22.211814   20180 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 20:49:22.392952   20180 system_pods.go:59] 16 kube-system pods found
	I0920 20:49:22.392979   20180 system_pods.go:61] "coredns-7c65d6cfc9-57rnw" [b1133b0b-cc06-4311-9bb6-50af62e1e360] Running
	I0920 20:49:22.392987   20180 system_pods.go:61] "csi-hostpath-attacher-0" [98bcd33a-b03a-418f-b92f-e7b81e582a80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 20:49:22.392994   20180 system_pods.go:61] "csi-hostpath-resizer-0" [636162ab-04cb-4555-98d8-a66270a4f1da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 20:49:22.393001   20180 system_pods.go:61] "csi-hostpathplugin-mk5k4" [2a54156b-f734-45c4-aa12-19769dd0e1a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 20:49:22.393005   20180 system_pods.go:61] "etcd-ubuntu-20-agent-2" [196b76f8-5e80-4f2d-b234-4683af81fe5f] Running
	I0920 20:49:22.393012   20180 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [eda804e9-918a-4243-8b3f-4fff2ded7153] Running
	I0920 20:49:22.393018   20180 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [72170d6b-2bba-49ce-a567-b95e08521cca] Running
	I0920 20:49:22.393024   20180 system_pods.go:61] "kube-proxy-75wt4" [87279c83-98d0-4c21-8df6-af13deac9832] Running
	I0920 20:49:22.393029   20180 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [5bd6f614-0faa-4b39-b5e3-719590323564] Running
	I0920 20:49:22.393038   20180 system_pods.go:61] "metrics-server-84c5f94fbc-r8fg4" [c1ae637f-e27e-48fe-96fb-249357137ba1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 20:49:22.393048   20180 system_pods.go:61] "nvidia-device-plugin-daemonset-9ml89" [3c92ac5c-2c50-4c61-ab43-ddb84a8f39c1] Running
	I0920 20:49:22.393055   20180 system_pods.go:61] "registry-66c9cd494c-h9zxc" [94a85633-fa9f-4487-8730-3b82acd43c17] Running
	I0920 20:49:22.393062   20180 system_pods.go:61] "registry-proxy-lkpsj" [260577bf-b43b-4e23-97b2-02d10adfa092] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 20:49:22.393074   20180 system_pods.go:61] "snapshot-controller-56fcc65765-ddrnq" [21dc34c7-3e6b-401e-aa65-5066383310dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:22.393087   20180 system_pods.go:61] "snapshot-controller-56fcc65765-kgmwp" [f2a0abdd-2a42-402f-9fd2-47318fb4e02d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:22.393096   20180 system_pods.go:61] "storage-provisioner" [c68d692e-6601-405b-a2c3-f181a8053b18] Running
	I0920 20:49:22.393108   20180 system_pods.go:74] duration metric: took 181.285588ms to wait for pod list to return data ...
	I0920 20:49:22.393120   20180 default_sa.go:34] waiting for default service account to be created ...
	I0920 20:49:22.559499   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.584542   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:22.587313   20180 default_sa.go:45] found service account: "default"
	I0920 20:49:22.587341   20180 default_sa.go:55] duration metric: took 194.213859ms for default service account to be created ...
	I0920 20:49:22.587351   20180 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 20:49:22.794770   20180 system_pods.go:86] 16 kube-system pods found
	I0920 20:49:22.794802   20180 system_pods.go:89] "coredns-7c65d6cfc9-57rnw" [b1133b0b-cc06-4311-9bb6-50af62e1e360] Running
	I0920 20:49:22.794811   20180 system_pods.go:89] "csi-hostpath-attacher-0" [98bcd33a-b03a-418f-b92f-e7b81e582a80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 20:49:22.794819   20180 system_pods.go:89] "csi-hostpath-resizer-0" [636162ab-04cb-4555-98d8-a66270a4f1da] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 20:49:22.794829   20180 system_pods.go:89] "csi-hostpathplugin-mk5k4" [2a54156b-f734-45c4-aa12-19769dd0e1a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 20:49:22.794838   20180 system_pods.go:89] "etcd-ubuntu-20-agent-2" [196b76f8-5e80-4f2d-b234-4683af81fe5f] Running
	I0920 20:49:22.794844   20180 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [eda804e9-918a-4243-8b3f-4fff2ded7153] Running
	I0920 20:49:22.794854   20180 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [72170d6b-2bba-49ce-a567-b95e08521cca] Running
	I0920 20:49:22.794861   20180 system_pods.go:89] "kube-proxy-75wt4" [87279c83-98d0-4c21-8df6-af13deac9832] Running
	I0920 20:49:22.794870   20180 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [5bd6f614-0faa-4b39-b5e3-719590323564] Running
	I0920 20:49:22.794879   20180 system_pods.go:89] "metrics-server-84c5f94fbc-r8fg4" [c1ae637f-e27e-48fe-96fb-249357137ba1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 20:49:22.794887   20180 system_pods.go:89] "nvidia-device-plugin-daemonset-9ml89" [3c92ac5c-2c50-4c61-ab43-ddb84a8f39c1] Running
	I0920 20:49:22.794893   20180 system_pods.go:89] "registry-66c9cd494c-h9zxc" [94a85633-fa9f-4487-8730-3b82acd43c17] Running
	I0920 20:49:22.794903   20180 system_pods.go:89] "registry-proxy-lkpsj" [260577bf-b43b-4e23-97b2-02d10adfa092] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 20:49:22.794912   20180 system_pods.go:89] "snapshot-controller-56fcc65765-ddrnq" [21dc34c7-3e6b-401e-aa65-5066383310dd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:22.794928   20180 system_pods.go:89] "snapshot-controller-56fcc65765-kgmwp" [f2a0abdd-2a42-402f-9fd2-47318fb4e02d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:22.794938   20180 system_pods.go:89] "storage-provisioner" [c68d692e-6601-405b-a2c3-f181a8053b18] Running
	I0920 20:49:22.794947   20180 system_pods.go:126] duration metric: took 207.589673ms to wait for k8s-apps to be running ...
	I0920 20:49:22.794960   20180 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 20:49:22.795013   20180 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0920 20:49:22.811727   20180 system_svc.go:56] duration metric: took 16.7576ms WaitForService to wait for kubelet
	I0920 20:49:22.811753   20180 kubeadm.go:582] duration metric: took 10.706935564s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:49:22.811777   20180 node_conditions.go:102] verifying NodePressure condition ...
	I0920 20:49:22.988476   20180 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0920 20:49:22.988504   20180 node_conditions.go:123] node cpu capacity is 8
	I0920 20:49:22.988518   20180 node_conditions.go:105] duration metric: took 176.735395ms to run NodePressure ...
	I0920 20:49:22.988532   20180 start.go:241] waiting for startup goroutines ...
	I0920 20:49:23.142176   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:23.142716   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:23.559072   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:23.583870   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:24.059407   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:24.084140   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:24.559979   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:24.585021   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:25.059282   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:25.084054   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:25.559686   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:25.584865   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:26.059722   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:26.084214   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:26.559876   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:26.583788   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:27.141240   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:27.142011   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:27.558537   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:27.584205   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:28.059625   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:28.084440   20180 kapi.go:107] duration metric: took 15.003786275s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 20:49:28.559950   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:29.059527   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:29.559576   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:30.059545   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:30.559554   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:31.059522   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:31.559251   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:32.060076   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:32.560020   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:33.059567   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:33.560146   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:34.060619   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:34.559652   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:35.143121   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:35.560456   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:36.060470   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:36.642344   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:37.059474   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:37.560679   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:38.060023   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:38.560089   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:39.059921   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:39.560644   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:40.059638   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:40.559556   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:41.059296   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:41.558907   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:42.059996   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:42.560097   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.059593   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.559470   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:44.059934   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:44.559450   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:45.060371   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:45.559584   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:46.060294   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:46.559498   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:47.059608   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:47.558665   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:48.059700   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:48.559600   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:49.060332   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:49.559405   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:50.059518   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:50.559382   20180 kapi.go:107] duration metric: took 35.004106999s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 20:50:01.540874   20180 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 20:50:01.540897   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:02.040748   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:02.540823   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:03.040325   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:03.540759   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:04.040609   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:04.540106   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:05.040623   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:05.540641   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:06.040897   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:06.541029   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:07.040939   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:07.540655   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:08.040731   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:08.540507   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:09.040493   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:09.540219   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:10.041262   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:10.541109   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:11.040996   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:11.541086   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:12.041417   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:12.540882   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:13.040422   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:13.540780   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:14.040500   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:14.540390   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:15.040461   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:15.540641   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:16.040207   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:16.541485   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:17.040612   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:17.540798   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:18.040321   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:18.541874   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:19.040682   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:19.540638   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:20.040435   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:20.541453   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:21.041455   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:21.540702   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:22.040941   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:22.541300   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:23.041168   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:23.541850   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:24.041012   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:24.541224   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:25.040713   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:25.540492   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:26.040396   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:26.541688   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:27.040702   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:27.540456   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:28.041459   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:28.541275   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:29.041115   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:29.541509   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:30.040443   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:30.540322   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:31.041473   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:31.541749   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:32.040634   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:32.540694   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:33.040331   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:33.541745   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:34.040242   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:34.540854   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:35.041111   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:35.541616   20180 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:36.040825   20180 kapi.go:107] duration metric: took 1m16.003230373s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 20:50:36.042199   20180 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0920 20:50:36.043734   20180 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 20:50:36.045036   20180 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 20:50:36.046454   20180 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, cloud-spanner, yakd, storage-provisioner, metrics-server, inspektor-gadget, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0920 20:50:36.047753   20180 addons.go:510] duration metric: took 1m23.948838661s for enable addons: enabled=[nvidia-device-plugin default-storageclass cloud-spanner yakd storage-provisioner metrics-server inspektor-gadget storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0920 20:50:36.047793   20180 start.go:246] waiting for cluster config update ...
	I0920 20:50:36.047812   20180 start.go:255] writing updated cluster config ...
	I0920 20:50:36.048058   20180 exec_runner.go:51] Run: rm -f paused
	I0920 20:50:36.092790   20180 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 20:50:36.094745   20180 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
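	A minimal sketch of the opt-out the gcp-auth messages above describe (the pod name is a placeholder; the label key is taken verbatim from the addon output, and `--refresh` re-mounts credentials into existing pods):
	
	  # Create a pod that skips the GCP credential mount via the documented label.
	  kubectl --context minikube run skip-creds-demo \
	    --labels="gcp-auth-skip-secret=true" \
	    --image=busybox --restart=Never -- sleep 3600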
	
	
	==> Docker <==
	-- Logs begin at Sun 2024-08-11 20:18:06 UTC, end at Fri 2024-09-20 21:00:29 UTC. --
	Sep 20 20:52:53 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:53.962558882Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 20 20:52:53 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:53.962561057Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 20 20:52:53 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:53.962604927Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 20 20:52:53 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:53.965715963Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 20 20:52:53 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:53.966637940Z" level=error msg="Error running exec 6e113b963b231d7177e751d503342850c5e02ae2f42429ba0af4a6e155022557 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=e95e76e34ea70200 traceID=9402058185d7e293cfd6ef2ba267e970
	Sep 20 20:52:53 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:53.967401011Z" level=error msg="Error running exec fada8c174c17ace1cd65a5baf88ebbdef8d1c882eb02eab9b011cace3a00e0b3 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=527159cc85c876eb traceID=9d1a497f017f2b008ea31e5608e80e16
	Sep 20 20:52:54 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:52:54.088003253Z" level=info msg="ignoring event" container=fd90c5fca3ff49e78f373fb6cccc4060cccd9afab8a3bd4285e7f722037e889d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 20:54:10 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:54:10.523337119Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=d9fe50539ac6f93b traceID=f768be1f70494f63a79136bfd6868a6d
	Sep 20 20:54:10 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:54:10.525602317Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=d9fe50539ac6f93b traceID=f768be1f70494f63a79136bfd6868a6d
	Sep 20 20:55:43 ubuntu-20-agent-2 cri-dockerd[20726]: time="2024-09-20T20:55:43Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 20 20:55:45 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:55:45.025080059Z" level=info msg="ignoring event" container=c9953521c9f2f80ce46f7971e85bc1a98d4eb3d6048c565b569c5e8d1e1b8798 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 20:56:57 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:56:57.514686543Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5003dc7d3b6ce8a2 traceID=966b363656487a737fd4a8841e1e1915
	Sep 20 20:56:57 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:56:57.516779578Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5003dc7d3b6ce8a2 traceID=966b363656487a737fd4a8841e1e1915
	Sep 20 20:59:28 ubuntu-20-agent-2 cri-dockerd[20726]: time="2024-09-20T20:59:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b0173f138bc1cbd539d8594e12112c29ea1f83fe0f7638f1d996543fe1cb6223/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 20 20:59:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:59:28.813227584Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=97651ed730d07bac traceID=baf66a42bdb0bae3a9f03742c7abca9e
	Sep 20 20:59:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:59:28.815384051Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=97651ed730d07bac traceID=baf66a42bdb0bae3a9f03742c7abca9e
	Sep 20 20:59:41 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:59:41.512570585Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=13bb970a5ed7d6cf traceID=8ce131ae86e7b37817f8fa0c5630040d
	Sep 20 20:59:41 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T20:59:41.514642562Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=13bb970a5ed7d6cf traceID=8ce131ae86e7b37817f8fa0c5630040d
	Sep 20 21:00:07 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:07.526057946Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=5580e70f5399aa26 traceID=51dea950d9bdf9d58e8e02bc2e9d9896
	Sep 20 21:00:07 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:07.527996265Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=5580e70f5399aa26 traceID=51dea950d9bdf9d58e8e02bc2e9d9896
	Sep 20 21:00:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:28.280380779Z" level=info msg="ignoring event" container=b0173f138bc1cbd539d8594e12112c29ea1f83fe0f7638f1d996543fe1cb6223 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:28.534500118Z" level=info msg="ignoring event" container=0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:28.595581808Z" level=info msg="ignoring event" container=d1396593cca733b6117d9ab7c080b88d501b9bd6f43afc8c16f73e10c030a92f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:28.671792408Z" level=info msg="ignoring event" container=af9dabc4e6d9a9c8461ee74356ed9ac51541a5ed6cc15d402552e32183280c48 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 21:00:28 ubuntu-20-agent-2 dockerd[20396]: time="2024-09-20T21:00:28.762599246Z" level=info msg="ignoring event" container=9d38c4f2f625c2dd96754658624817af16824e476c060af8268bb91d095d16e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
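	The pull failures above can be exercised directly with the same image reference (a sketch run on the host; expect the identical "unauthorized: authentication failed" if the registry auth issue persists):
	
	  # Same pull path the kubelet exercised via the Docker daemon.
	  docker pull gcr.io/k8s-minikube/busybox:latest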
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c9953521c9f2f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   0f5e1a1a31de5       gadget-lx8nd
	b354a4ea6d705       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   cd4ad8b11b987       gcp-auth-89d5ffd79-6krl6
	ebc759a9198ae       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   a83271d199de3       csi-hostpathplugin-mk5k4
	635f7e156c847       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   a83271d199de3       csi-hostpathplugin-mk5k4
	85a852bb542fd       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   a83271d199de3       csi-hostpathplugin-mk5k4
	440b83db0b0d2       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   a83271d199de3       csi-hostpathplugin-mk5k4
	e89cbc4018197       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   a83271d199de3       csi-hostpathplugin-mk5k4
	8d00a07f2e450       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   a83271d199de3       csi-hostpathplugin-mk5k4
	80b6797aac0a4       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   b93b6f67cb3c7       csi-hostpath-resizer-0
	086d99ee270a7       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   4b2bdebbbd96b       csi-hostpath-attacher-0
	6ca19f9fa10ec       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   0403f9d19821a       snapshot-controller-56fcc65765-ddrnq
	b89b630027d5b       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   b213ac584f018       snapshot-controller-56fcc65765-kgmwp
	748c7cb78fa6d       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   d8a9eaaea6174       local-path-provisioner-86d989889c-dw98n
	52f90e6f068aa       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   ccf9744afe5db       yakd-dashboard-67d98fc6b-q69lb
	39813f2eae876       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        10 minutes ago      Running             metrics-server                           0                   2305dc05cbc0e       metrics-server-84c5f94fbc-r8fg4
	a1749f4e50828       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   dff843451ce46       cloud-spanner-emulator-769b77f747-ndkcz
	708e6656f04e2       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   8498f9d26f31e       nvidia-device-plugin-daemonset-9ml89
	de5fb063acdb4       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   818dd6b99d02c       storage-provisioner
	f12058db504ad       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   7928101a55ffb       coredns-7c65d6cfc9-57rnw
	6f9a1d6bb9e44       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   75fdba180bb92       kube-proxy-75wt4
	cbd580b353bf3       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   20ae3a9930237       kube-scheduler-ubuntu-20-agent-2
	b48b9c8f5139d       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   ce4bb8805e454       etcd-ubuntu-20-agent-2
	47bfaead87737       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   bf5d95c8dd763       kube-apiserver-ubuntu-20-agent-2
	221d32dd9bb7b       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   c5fda72bf74f5       kube-controller-manager-ubuntu-20-agent-2
	
	
	==> coredns [f12058db504a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:40160 - 41255 "HINFO IN 7543578608229357731.3542518416492414400. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021270791s
	[INFO] 10.244.0.23:60994 - 17503 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000314374s
	[INFO] 10.244.0.23:33872 - 20952 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000189485s
	[INFO] 10.244.0.23:42065 - 7217 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164885s
	[INFO] 10.244.0.23:53936 - 5860 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120965s
	[INFO] 10.244.0.23:57923 - 55414 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001517s
	[INFO] 10.244.0.23:59125 - 36506 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000158984s
	[INFO] 10.244.0.23:57950 - 19416 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.002021348s
	[INFO] 10.244.0.23:40576 - 14776 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004461841s
	[INFO] 10.244.0.23:36596 - 63525 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003348995s
	[INFO] 10.244.0.23:34053 - 9158 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004554138s
	[INFO] 10.244.0.23:33644 - 41169 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002420875s
	[INFO] 10.244.0.23:54774 - 17567 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004724108s
	[INFO] 10.244.0.23:47025 - 21802 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001417634s
	[INFO] 10.244.0.23:44966 - 6973 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001430901s
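	The NXDOMAIN run above is the ordinary resolv.conf search-path walk (the pod uses ndots:5, per the cri-dockerd entry in the Docker log): each search suffix is tried before the absolute name answers NOERROR. A sketch that reproduces it from a throwaway pod (pod name hypothetical):
	
	  kubectl --context minikube run dns-probe --image=busybox --restart=Never -it -- \
	    nslookup storage.googleapis.com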
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T20_49_07_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 20:49:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:00:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 20:56:16 +0000   Fri, 20 Sep 2024 20:49:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 20:56:16 +0000   Fri, 20 Sep 2024 20:49:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 20:56:16 +0000   Fri, 20 Sep 2024 20:49:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 20:56:16 +0000   Fri, 20 Sep 2024 20:49:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    a3d12c8f-1aea-485c-8ba4-0a0207c8ac9f
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     cloud-spanner-emulator-769b77f747-ndkcz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-lx8nd                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-6krl6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-57rnw                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-mk5k4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-75wt4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-r8fg4              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-9ml89         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-ddrnq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-kgmwp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-dw98n      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-q69lb               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
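	A compact re-check of the node conditions summarized above, without the full describe output (a sketch):
	
	  kubectl --context minikube get node ubuntu-20-agent-2 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'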
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff a6 55 32 a5 08 51 08 06
	[  +0.022778] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa b2 ab 4a 4b e8 08 06
	[  +2.648895] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 b9 67 90 ab 9e 08 06
	[  +1.679092] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 82 24 d1 8f 70 08 06
	[  +2.156338] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 62 45 0d 8e 8a 9e 08 06
	[  +4.496594] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff d6 d5 2b eb 38 91 08 06
	[  +0.036420] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c2 e1 43 ce 7b e1 08 06
	[  +0.052441] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 7d 01 b9 ba c7 08 06
	[  +0.954869] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 8e 80 69 78 34 ff 08 06
	[Sep20 20:50] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 ce a3 6a 0f 8f 08 06
	[  +0.016295] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 8e 2f 2a 61 d1 af 08 06
	[ +11.103546] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 25 a7 0b af 85 08 06
	[  +0.000485] IPv4: martian source 10.244.0.23 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 42 46 19 e2 39 77 08 06
	
	
	==> etcd [b48b9c8f5139] <==
	{"level":"info","ts":"2024-09-20T20:49:03.808957Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
	{"level":"info","ts":"2024-09-20T20:49:03.808969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T20:49:03.808974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-20T20:49:03.808983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-20T20:49:03.808990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-20T20:49:03.809841Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T20:49:03.810410Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T20:49:03.810412Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T20:49:03.810432Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T20:49:03.810656Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T20:49:03.810679Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T20:49:03.810703Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T20:49:03.810762Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T20:49:03.810788Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T20:49:03.811439Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T20:49:03.811529Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T20:49:03.812234Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-20T20:49:03.812288Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T20:49:19.946178Z","caller":"traceutil/trace.go:171","msg":"trace[2050446239] transaction","detail":"{read_only:false; response_revision:858; number_of_response:1; }","duration":"101.190726ms","start":"2024-09-20T20:49:19.844972Z","end":"2024-09-20T20:49:19.946163Z","steps":["trace[2050446239] 'process raft request'  (duration: 101.12898ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:49:19.946196Z","caller":"traceutil/trace.go:171","msg":"trace[1027106398] transaction","detail":"{read_only:false; response_revision:857; number_of_response:1; }","duration":"101.224332ms","start":"2024-09-20T20:49:19.844953Z","end":"2024-09-20T20:49:19.946177Z","steps":["trace[1027106398] 'process raft request'  (duration: 99.459793ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:49:19.946446Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.281807ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" ","response":"range_response_count:1 size:716"}
	{"level":"info","ts":"2024-09-20T20:49:19.946521Z","caller":"traceutil/trace.go:171","msg":"trace[816460777] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:858; }","duration":"100.374802ms","start":"2024-09-20T20:49:19.846136Z","end":"2024-09-20T20:49:19.946511Z","steps":["trace[816460777] 'agreement among raft nodes before linearized reading'  (duration: 100.096101ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:59:03.828462Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1722}
	{"level":"info","ts":"2024-09-20T20:59:03.851434Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1722,"took":"22.501092ms","hash":2222568896,"current-db-size-bytes":8273920,"current-db-size":"8.3 MB","current-db-size-in-use-bytes":4395008,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-09-20T20:59:03.851474Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2222568896,"revision":1722,"compact-revision":-1}
	
	
	==> gcp-auth [b354a4ea6d70] <==
	2024/09/20 20:50:35 GCP Auth Webhook started!
	2024/09/20 20:50:52 Ready to marshal response ...
	2024/09/20 20:50:52 Ready to write response ...
	2024/09/20 20:50:52 Ready to marshal response ...
	2024/09/20 20:50:52 Ready to write response ...
	2024/09/20 20:51:15 Ready to marshal response ...
	2024/09/20 20:51:15 Ready to write response ...
	2024/09/20 20:51:15 Ready to marshal response ...
	2024/09/20 20:51:15 Ready to write response ...
	2024/09/20 20:51:15 Ready to marshal response ...
	2024/09/20 20:51:15 Ready to write response ...
	2024/09/20 20:59:28 Ready to marshal response ...
	2024/09/20 20:59:28 Ready to write response ...
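	Each marshal/write pair above is the webhook admitting a pod creation; its registration can be spot-checked (a sketch; the object name is assumed to contain "gcp-auth"):
	
	  kubectl --context minikube get mutatingwebhookconfigurations | grep -i gcp-auth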
	
	
	==> kernel <==
	 21:00:29 up 42 min,  0 users,  load average: 0.17, 0.25, 0.25
	Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [47bfaead8773] <==
	W0920 20:49:54.476588       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.66.30:443: connect: connection refused
	W0920 20:50:01.035497       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.12.193:443: connect: connection refused
	E0920 20:50:01.035532       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.12.193:443: connect: connection refused" logger="UnhandledError"
	W0920 20:50:23.048284       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.12.193:443: connect: connection refused
	E0920 20:50:23.048321       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.12.193:443: connect: connection refused" logger="UnhandledError"
	W0920 20:50:23.071754       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.12.193:443: connect: connection refused
	E0920 20:50:23.071853       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.12.193:443: connect: connection refused" logger="UnhandledError"
	I0920 20:50:52.380693       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0920 20:50:52.400608       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0920 20:51:05.767919       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0920 20:51:05.776941       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0920 20:51:05.897090       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 20:51:05.897389       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 20:51:05.902414       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0920 20:51:05.938660       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 20:51:06.059330       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 20:51:06.066890       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 20:51:06.087978       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0920 20:51:06.910036       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0920 20:51:06.930027       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 20:51:06.938910       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0920 20:51:07.088659       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 20:51:07.167730       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 20:51:07.168338       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 20:51:07.285727       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
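	The contrast above between volcano "failing closed" and gcp-auth "failing open" is each webhook's failurePolicy (Fail vs Ignore); a sketch that lists them side by side:
	
	  kubectl --context minikube get mutatingwebhookconfigurations \
	    -o custom-columns='NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy'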
	
	
	==> kube-controller-manager [221d32dd9bb7] <==
	W0920 20:59:03.105947       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 20:59:03.105994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	[... 10 further identical *v1.PartialObjectMetadata reflector warn/error pairs (20:59:29 through 21:00:14) omitted ...]
	W0920 21:00:22.071244       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:00:22.071289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 21:00:28.498540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="10.06µs"
	
	
	==> kube-proxy [6f9a1d6bb9e4] <==
	I0920 20:49:13.211922       1 server_linux.go:66] "Using iptables proxy"
	I0920 20:49:13.443426       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0920 20:49:13.443503       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 20:49:13.521691       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 20:49:13.521751       1 server_linux.go:169] "Using iptables Proxier"
	I0920 20:49:13.528168       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 20:49:13.528641       1 server.go:483] "Version info" version="v1.31.1"
	I0920 20:49:13.528666       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 20:49:13.531946       1 config.go:199] "Starting service config controller"
	I0920 20:49:13.531990       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 20:49:13.532031       1 config.go:328] "Starting node config controller"
	I0920 20:49:13.532037       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 20:49:13.532253       1 config.go:105] "Starting endpoint slice config controller"
	I0920 20:49:13.532266       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 20:49:13.633762       1 shared_informer.go:320] Caches are synced for node config
	I0920 20:49:13.633817       1 shared_informer.go:320] Caches are synced for service config
	I0920 20:49:13.633880       1 shared_informer.go:320] Caches are synced for endpoint slice config
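	The nodePortAddresses warning above refers to a field of the proxy configuration, which on kubeadm-managed clusters lives in the kube-proxy ConfigMap; a sketch for locating it:
	
	  kubectl --context minikube -n kube-system get configmap kube-proxy \
	    -o yaml | grep -n nodePortAddresses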
	
	
	==> kube-scheduler [cbd580b353bf] <==
	W0920 20:49:04.684211       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 20:49:04.684228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:49:04.684260       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 20:49:04.684295       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:49:04.684359       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 20:49:04.684395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:49:05.506591       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 20:49:05.506634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:49:05.538157       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 20:49:05.538203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:49:05.569903       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 20:49:05.569944       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:49:05.583664       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 20:49:05.583701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:49:05.640404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 20:49:05.640450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:49:05.644866       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 20:49:05.644912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 20:49:05.676328       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 20:49:05.676366       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:49:05.685814       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 20:49:05.685856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:49:05.947602       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 20:49:05.947648       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 20:49:08.981339       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
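	The forbidden errors above are a startup race: the scheduler lists resources before its RBAC bindings are served, and the errors stop once caches sync (final line). A sketch to confirm the role exists after startup:
	
	  kubectl --context minikube get clusterrole system:kube-scheduler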
	
	
	==> kubelet <==
	-- Logs begin at Sun 2024-08-11 20:18:06 UTC, end at Fri 2024-09-20 21:00:29 UTC. --
	Sep 20 21:00:13 ubuntu-20-agent-2 kubelet[21629]: E0920 21:00:13.372381   21629 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cfb5121c-69b0-4596-89e8-15c4b5558d53"
	Sep 20 21:00:18 ubuntu-20-agent-2 kubelet[21629]: E0920 21:00:18.372873   21629 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="17c09791-25f7-43f8-a4f1-1fdd0ce296b2"
	Sep 20 21:00:19 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:19.370982   21629 scope.go:117] "RemoveContainer" containerID="c9953521c9f2f80ce46f7971e85bc1a98d4eb3d6048c565b569c5e8d1e1b8798"
	Sep 20 21:00:19 ubuntu-20-agent-2 kubelet[21629]: E0920 21:00:19.371183   21629 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-lx8nd_gadget(b0b1fb0a-be7e-4e5a-80cd-fe281bc1a0b0)\"" pod="gadget/gadget-lx8nd" podUID="b0b1fb0a-be7e-4e5a-80cd-fe281bc1a0b0"
	Sep 20 21:00:25 ubuntu-20-agent-2 kubelet[21629]: E0920 21:00:25.373293   21629 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cfb5121c-69b0-4596-89e8-15c4b5558d53"
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.471808   21629 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/17c09791-25f7-43f8-a4f1-1fdd0ce296b2-gcp-creds\") pod \"17c09791-25f7-43f8-a4f1-1fdd0ce296b2\" (UID: \"17c09791-25f7-43f8-a4f1-1fdd0ce296b2\") "
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.471873   21629 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvds5\" (UniqueName: \"kubernetes.io/projected/17c09791-25f7-43f8-a4f1-1fdd0ce296b2-kube-api-access-rvds5\") pod \"17c09791-25f7-43f8-a4f1-1fdd0ce296b2\" (UID: \"17c09791-25f7-43f8-a4f1-1fdd0ce296b2\") "
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.471950   21629 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/17c09791-25f7-43f8-a4f1-1fdd0ce296b2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "17c09791-25f7-43f8-a4f1-1fdd0ce296b2" (UID: "17c09791-25f7-43f8-a4f1-1fdd0ce296b2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.473980   21629 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c09791-25f7-43f8-a4f1-1fdd0ce296b2-kube-api-access-rvds5" (OuterVolumeSpecName: "kube-api-access-rvds5") pod "17c09791-25f7-43f8-a4f1-1fdd0ce296b2" (UID: "17c09791-25f7-43f8-a4f1-1fdd0ce296b2"). InnerVolumeSpecName "kube-api-access-rvds5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.572726   21629 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/17c09791-25f7-43f8-a4f1-1fdd0ce296b2-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.572759   21629 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rvds5\" (UniqueName: \"kubernetes.io/projected/17c09791-25f7-43f8-a4f1-1fdd0ce296b2-kube-api-access-rvds5\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.816524   21629 scope.go:117] "RemoveContainer" containerID="d1396593cca733b6117d9ab7c080b88d501b9bd6f43afc8c16f73e10c030a92f"
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.833122   21629 scope.go:117] "RemoveContainer" containerID="0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8"
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.851320   21629 scope.go:117] "RemoveContainer" containerID="0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8"
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: E0920 21:00:28.852126   21629 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8" containerID="0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8"
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.852159   21629 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8"} err="failed to get container status \"0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8\": rpc error: code = Unknown desc = Error response from daemon: No such container: 0b9550198c4f96b96b5dc2c116f1639f353a52dde65329f370ee6034a34578d8"
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.874458   21629 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4bth\" (UniqueName: \"kubernetes.io/projected/94a85633-fa9f-4487-8730-3b82acd43c17-kube-api-access-z4bth\") pod \"94a85633-fa9f-4487-8730-3b82acd43c17\" (UID: \"94a85633-fa9f-4487-8730-3b82acd43c17\") "
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.876154   21629 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a85633-fa9f-4487-8730-3b82acd43c17-kube-api-access-z4bth" (OuterVolumeSpecName: "kube-api-access-z4bth") pod "94a85633-fa9f-4487-8730-3b82acd43c17" (UID: "94a85633-fa9f-4487-8730-3b82acd43c17"). InnerVolumeSpecName "kube-api-access-z4bth". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.975544   21629 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zgnl\" (UniqueName: \"kubernetes.io/projected/260577bf-b43b-4e23-97b2-02d10adfa092-kube-api-access-2zgnl\") pod \"260577bf-b43b-4e23-97b2-02d10adfa092\" (UID: \"260577bf-b43b-4e23-97b2-02d10adfa092\") "
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.975730   21629 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-z4bth\" (UniqueName: \"kubernetes.io/projected/94a85633-fa9f-4487-8730-3b82acd43c17-kube-api-access-z4bth\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 21:00:28 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:28.977413   21629 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/260577bf-b43b-4e23-97b2-02d10adfa092-kube-api-access-2zgnl" (OuterVolumeSpecName: "kube-api-access-2zgnl") pod "260577bf-b43b-4e23-97b2-02d10adfa092" (UID: "260577bf-b43b-4e23-97b2-02d10adfa092"). InnerVolumeSpecName "kube-api-access-2zgnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 21:00:29 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:29.076656   21629 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2zgnl\" (UniqueName: \"kubernetes.io/projected/260577bf-b43b-4e23-97b2-02d10adfa092-kube-api-access-2zgnl\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 21:00:29 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:29.382084   21629 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17c09791-25f7-43f8-a4f1-1fdd0ce296b2" path="/var/lib/kubelet/pods/17c09791-25f7-43f8-a4f1-1fdd0ce296b2/volumes"
	Sep 20 21:00:29 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:29.382313   21629 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="260577bf-b43b-4e23-97b2-02d10adfa092" path="/var/lib/kubelet/pods/260577bf-b43b-4e23-97b2-02d10adfa092/volumes"
	Sep 20 21:00:29 ubuntu-20-agent-2 kubelet[21629]: I0920 21:00:29.382627   21629 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a85633-fa9f-4487-8730-3b82acd43c17" path="/var/lib/kubelet/pods/94a85633-fa9f-4487-8730-3b82acd43c17/volumes"
	
	
	==> storage-provisioner [de5fb063acdb] <==
	I0920 20:49:14.714203       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 20:49:14.736172       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 20:49:14.736224       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 20:49:14.753727       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 20:49:14.755157       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_f8672bce-c4cb-421a-8a72-fd4d339910ad!
	I0920 20:49:14.757300       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbeb251e-08da-4635-9e98-8038c240d12c", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_f8672bce-c4cb-421a-8a72-fd4d339910ad became leader
	I0920 20:49:14.856613       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_f8672bce-c4cb-421a-8a72-fd4d339910ad!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Fri, 20 Sep 2024 20:51:15 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kggpp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kggpp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m14s                   default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m43s (x4 over 9m13s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m13s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m13s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m13s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m58s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.85s)

Test pass (110/167)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.53
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.1
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 0.93
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.05
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.54
22 TestOffline 69.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 101.41
29 TestAddons/serial/Volcano 39.57
31 TestAddons/serial/GCPAuth/Namespaces 0.11
35 TestAddons/parallel/InspektorGadget 10.44
36 TestAddons/parallel/MetricsServer 5.36
38 TestAddons/parallel/CSI 58.16
39 TestAddons/parallel/Headlamp 15.94
40 TestAddons/parallel/CloudSpanner 5.25
42 TestAddons/parallel/NvidiaDevicePlugin 5.22
43 TestAddons/parallel/Yakd 10.4
44 TestAddons/StoppedEnableDisable 10.68
46 TestCertExpiration 226.11
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 26.51
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 29.88
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.06
64 TestFunctional/serial/MinikubeKubectlCmd 0.1
65 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
66 TestFunctional/serial/ExtraConfig 34.13
67 TestFunctional/serial/ComponentHealth 0.07
68 TestFunctional/serial/LogsCmd 0.78
69 TestFunctional/serial/LogsFileCmd 0.82
70 TestFunctional/serial/InvalidService 4.22
72 TestFunctional/parallel/ConfigCmd 0.26
73 TestFunctional/parallel/DashboardCmd 9.37
74 TestFunctional/parallel/DryRun 0.15
75 TestFunctional/parallel/InternationalLanguage 0.08
76 TestFunctional/parallel/StatusCmd 0.41
79 TestFunctional/parallel/ProfileCmd/profile_not_create 0.2
80 TestFunctional/parallel/ProfileCmd/profile_list 0.19
81 TestFunctional/parallel/ProfileCmd/profile_json_output 0.19
83 TestFunctional/parallel/ServiceCmd/DeployApp 10.14
84 TestFunctional/parallel/ServiceCmd/List 0.33
85 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
86 TestFunctional/parallel/ServiceCmd/HTTPS 0.14
87 TestFunctional/parallel/ServiceCmd/Format 0.14
88 TestFunctional/parallel/ServiceCmd/URL 0.14
89 TestFunctional/parallel/ServiceCmdConnect 7.28
90 TestFunctional/parallel/AddonsCmd 0.1
91 TestFunctional/parallel/PersistentVolumeClaim 21.45
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.26
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
97 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.18
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
103 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
106 TestFunctional/parallel/MySQL 20.99
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 14.63
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 14.93
115 TestFunctional/parallel/NodeLabels 0.06
119 TestFunctional/parallel/Version/short 0.04
120 TestFunctional/parallel/Version/components 0.39
121 TestFunctional/parallel/License 0.21
122 TestFunctional/delete_echo-server_images 0.03
123 TestFunctional/delete_my-image_image 0.01
124 TestFunctional/delete_minikube_cached_images 0.02
129 TestImageBuild/serial/Setup 14.74
130 TestImageBuild/serial/NormalBuild 1.5
131 TestImageBuild/serial/BuildWithBuildArg 0.81
132 TestImageBuild/serial/BuildWithDockerIgnore 0.57
133 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.61
137 TestJSONOutput/start/Command 27.37
138 TestJSONOutput/start/Audit 0
140 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
141 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
143 TestJSONOutput/pause/Command 0.52
144 TestJSONOutput/pause/Audit 0
146 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/unpause/Command 0.42
150 TestJSONOutput/unpause/Audit 0
152 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/stop/Command 10.39
156 TestJSONOutput/stop/Audit 0
158 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
160 TestErrorJSONOutput 0.19
165 TestMainNoArgs 0.04
166 TestMinikubeProfile 33.64
174 TestPause/serial/Start 27.18
175 TestPause/serial/SecondStartNoReconfiguration 33.6
176 TestPause/serial/Pause 0.48
177 TestPause/serial/VerifyStatus 0.13
178 TestPause/serial/Unpause 0.39
179 TestPause/serial/PauseAgain 0.54
180 TestPause/serial/DeletePaused 1.76
181 TestPause/serial/VerifyDeletedResources 0.06
195 TestRunningBinaryUpgrade 64.84
197 TestStoppedBinaryUpgrade/Setup 0.43
198 TestStoppedBinaryUpgrade/Upgrade 50.13
199 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
200 TestKubernetesUpgrade 307.4
TestDownloadOnly/v1.20.0/json-events (1.53s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.533050895s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.53s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (53.682668ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:41
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:47:41.496652   16391 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:41.496891   16391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:41.496899   16391 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:41.496904   16391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:41.497064   16391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9477/.minikube/bin
	W0920 20:47:41.497195   16391 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19672-9477/.minikube/config/config.json: open /home/jenkins/minikube-integration/19672-9477/.minikube/config/config.json: no such file or directory
	I0920 20:47:41.497769   16391 out.go:352] Setting JSON to true
	I0920 20:47:41.498692   16391 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1806,"bootTime":1726863455,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:41.498782   16391 start.go:139] virtualization: kvm guest
	I0920 20:47:41.501053   16391 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 20:47:41.501211   16391 notify.go:220] Checking for updates...
	W0920 20:47:41.501231   16391 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-9477/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 20:47:41.502441   16391 out.go:169] MINIKUBE_LOCATION=19672
	I0920 20:47:41.503923   16391 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:41.505183   16391 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-9477/kubeconfig
	I0920 20:47:41.506481   16391 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9477/.minikube
	I0920 20:47:41.507801   16391 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

TestDownloadOnly/v1.20.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.10s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (0.93s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (0.93s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (52.7691ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:43
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:47:43.304455   16544 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:43.304580   16544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:43.304590   16544 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:43.304596   16544 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:43.304788   16544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9477/.minikube/bin
	I0920 20:47:43.305316   16544 out.go:352] Setting JSON to true
	I0920 20:47:43.306132   16544 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":1808,"bootTime":1726863455,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:43.306212   16544 start.go:139] virtualization: kvm guest
	I0920 20:47:43.308249   16544 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 20:47:43.308354   16544 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-9477/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 20:47:43.308410   16544 notify.go:220] Checking for updates...
	I0920 20:47:43.309548   16544 out.go:169] MINIKUBE_LOCATION=19672
	I0920 20:47:43.310775   16544 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:43.312030   16544 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-9477/kubeconfig
	I0920 20:47:43.313314   16544 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9477/.minikube
	I0920 20:47:43.314557   16544 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.05s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
I0920 20:47:44.701386   16380 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:37117 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.54s)

TestOffline (69.37s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (1m7.872600421s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.496978102s)
--- PASS: TestOffline (69.37s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (43.763859ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (43.56208ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

TestAddons/Setup (101.41s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (1m41.406322412s)
--- PASS: TestAddons/Setup (101.41s)

TestAddons/serial/Volcano (39.57s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 9.044414ms
addons_test.go:843: volcano-admission stabilized in 9.122981ms
addons_test.go:851: volcano-controller stabilized in 9.162305ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-dwp7m" [a575638b-1516-4c0d-86a3-ac0300c0bd05] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003779509s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-pp7lm" [7bc971b9-822c-4ac3-bea7-57b969d75d63] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003771269s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-b2kxp" [11205045-126b-4137-ac57-aeef9a7a8861] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003635535s
addons_test.go:870: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5ef176f4-4f56-4d11-8d15-37e70c4a4d84] Pending
helpers_test.go:344: "test-job-nginx-0" [5ef176f4-4f56-4d11-8d15-37e70c4a4d84] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [5ef176f4-4f56-4d11-8d15-37e70c4a4d84] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.00399538s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.221090899s)
--- PASS: TestAddons/serial/Volcano (39.57s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/InspektorGadget (10.44s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lx8nd" [b0b1fb0a-be7e-4e5a-80cd-fe281bc1a0b0] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003910987s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.431868626s)
--- PASS: TestAddons/parallel/InspektorGadget (10.44s)

TestAddons/parallel/MetricsServer (5.36s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 1.954088ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-r8fg4" [c1ae637f-e27e-48fe-96fb-249357137ba1] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003506324s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.36s)

TestAddons/parallel/CSI (58.16s)

=== RUN   TestAddons/parallel/CSI
I0920 21:00:45.620964   16380 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 21:00:45.624675   16380 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 21:00:45.624697   16380 kapi.go:107] duration metric: took 3.751073ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 3.760434ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a57da52d-9c95-4c7f-867c-2087c59ac25b] Pending
helpers_test.go:344: "task-pv-pod" [a57da52d-9c95-4c7f-867c-2087c59ac25b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a57da52d-9c95-4c7f-867c-2087c59ac25b] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003927598s
addons_test.go:528: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context minikube delete pod task-pv-pod: (1.30477884s)
addons_test.go:544: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c70366a2-f605-44a6-8a69-30eccdf9b72e] Pending
helpers_test.go:344: "task-pv-pod-restore" [c70366a2-f605-44a6-8a69-30eccdf9b72e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c70366a2-f605-44a6-8a69-30eccdf9b72e] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003768465s
addons_test.go:570: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.286822431s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (58.16s)

TestAddons/parallel/Headlamp (15.94s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-dxnl9" [20565e24-3d05-4573-9df2-af1692b8ef88] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-dxnl9" [20565e24-3d05-4573-9df2-af1692b8ef88] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003569694s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.47896498s)
--- PASS: TestAddons/parallel/Headlamp (15.94s)

TestAddons/parallel/CloudSpanner (5.25s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-ndkcz" [a6bf5d1f-4ffe-416d-afe8-853c38a9151e] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003462165s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.25s)

TestAddons/parallel/NvidiaDevicePlugin (5.22s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9ml89" [3c92ac5c-2c50-4c61-ab43-ddb84a8f39c1] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003792277s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.22s)

TestAddons/parallel/Yakd (10.4s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-q69lb" [67ed410d-0d5c-4627-aae5-402f2ef3e9de] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003406552s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.399652782s)
--- PASS: TestAddons/parallel/Yakd (10.40s)

TestAddons/StoppedEnableDisable (10.68s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.385630303s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.68s)

TestCertExpiration (226.11s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (13.587559696s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (30.723969759s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.789564011s)
--- PASS: TestCertExpiration (226.11s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19672-9477/.minikube/files/etc/test/nested/copy/16380/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (26.51s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (26.509258408s)
--- PASS: TestFunctional/serial/StartWithProxy (26.51s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.88s)

=== RUN   TestFunctional/serial/SoftStart
I0920 21:06:45.064052   16380 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (29.875391052s)
functional_test.go:663: soft start took 29.876097191s for "minikube" cluster.
I0920 21:07:14.939747   16380 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (29.88s)
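
A soft start is just a second `minikube start` against a profile that is already running; it should reuse the cluster rather than rebuild it. A rough timing sketch under that assumption:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		out, err := exec.Command("minikube", "start", "-p", "minikube", "--alsologtostderr", "-v=8").CombinedOutput()
		if err != nil {
			log.Fatalf("soft start failed: %v\n%s", err, out)
		}
		// On a healthy profile this reuses the cluster, so it is far faster than a cold start.
		fmt.Printf("soft start took %s\n", time.Since(start))
	}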

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (34.13s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.13397856s)
functional_test.go:761: restart took 34.134099049s for "minikube" cluster.
I0920 21:07:49.381869   16380 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (34.13s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
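
The phase/status pairs above are read from the control-plane pods' JSON. A sketch of the same check, assuming kubectl reaches the minikube context; the podList type is a hypothetical model of only the fields used:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// podList models only the fields the check needs from `kubectl get po -o=json`.
	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "minikube", "get", "po",
			"-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			status := "NotReady"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" && c.Status == "True" {
					status = "Ready"
				}
			}
			fmt.Printf("%s phase: %s status: %s\n", p.Metadata.Labels["component"], p.Status.Phase, status)
		}
	}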

TestFunctional/serial/LogsCmd (0.78s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.78s)

TestFunctional/serial/LogsFileCmd (0.82s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd821586780/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.82s)

TestFunctional/serial/InvalidService (4.22s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (150.906226ms)
-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:31810 |
	|-----------|-------------|-------------|--------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)

TestFunctional/parallel/ConfigCmd (0.26s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (39.489491ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (41.517065ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.26s)
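
The unset/get pairs above rely on minikube's convention of exiting with status 14 when `config get` finds no value for the key. A sketch of that exit-code check; the messages are illustrative:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		err := exec.Command("minikube", "-p", "minikube", "config", "get", "cpus").Run()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("cpus is set")
		case errors.As(err, &ee) && ee.ExitCode() == 14:
			fmt.Println("cpus is not set (exit status 14, as expected after unset)")
		default:
			log.Fatal(err)
		}
	}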

TestFunctional/parallel/DashboardCmd (9.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/20 21:08:04 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 50958: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.37s)

TestFunctional/parallel/DryRun (0.15s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (79.029041ms)
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9477/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9477/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0920 21:08:04.933985   51350 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:08:04.934110   51350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:08:04.934120   51350 out.go:358] Setting ErrFile to fd 2...
	I0920 21:08:04.934126   51350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:08:04.934297   51350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9477/.minikube/bin
	I0920 21:08:04.934805   51350 out.go:352] Setting JSON to false
	I0920 21:08:04.935756   51350 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3030,"bootTime":1726863455,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:08:04.935839   51350 start.go:139] virtualization: kvm guest
	I0920 21:08:04.938069   51350 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 21:08:04.939386   51350 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-9477/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 21:08:04.939429   51350 notify.go:220] Checking for updates...
	I0920 21:08:04.939442   51350 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:08:04.940855   51350 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:08:04.942763   51350 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9477/kubeconfig
	I0920 21:08:04.944259   51350 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9477/.minikube
	I0920 21:08:04.945808   51350 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:08:04.947237   51350 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:08:04.948956   51350 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:08:04.949245   51350 exec_runner.go:51] Run: systemctl --version
	I0920 21:08:04.952007   51350 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:08:04.964158   51350 out.go:177] * Using the none driver based on existing profile
	I0920 21:08:04.965474   51350 start.go:297] selected driver: none
	I0920 21:08:04.965488   51350 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:08:04.965606   51350 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:08:04.965626   51350 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0920 21:08:04.965952   51350 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0920 21:08:04.968324   51350 out.go:201] 
	W0920 21:08:04.969617   51350 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 21:08:04.970785   51350 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.15s)
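
The dry run validates flags without touching the cluster, and a 250MB request trips the 1800MB floor with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as the log shows. A sketch asserting that code:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "minikube", "--dry-run",
			"--memory", "250MB", "--driver=none", "--bootstrapper=kubeadm")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit code:", ee.ExitCode()) // 23 = RSRC_INSUFFICIENT_REQ_MEMORY
		} else {
			log.Fatal("expected a validation failure, got: ", err)
		}
	}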

TestFunctional/parallel/InternationalLanguage (0.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (82.15469ms)
-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9477/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9477/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0920 21:08:05.091935   51380 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:08:05.092051   51380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:08:05.092061   51380 out.go:358] Setting ErrFile to fd 2...
	I0920 21:08:05.092067   51380 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:08:05.092321   51380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9477/.minikube/bin
	I0920 21:08:05.092861   51380 out.go:352] Setting JSON to false
	I0920 21:08:05.093822   51380 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3030,"bootTime":1726863455,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:08:05.093917   51380 start.go:139] virtualization: kvm guest
	I0920 21:08:05.096039   51380 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0920 21:08:05.097447   51380 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-9477/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 21:08:05.097499   51380 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:08:05.097557   51380 notify.go:220] Checking for updates...
	I0920 21:08:05.100067   51380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:08:05.101674   51380 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9477/kubeconfig
	I0920 21:08:05.103122   51380 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9477/.minikube
	I0920 21:08:05.104479   51380 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:08:05.105870   51380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:08:05.107494   51380 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 21:08:05.107780   51380 exec_runner.go:51] Run: systemctl --version
	I0920 21:08:05.110465   51380 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:08:05.122684   51380 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0920 21:08:05.123956   51380 start.go:297] selected driver: none
	I0920 21:08:05.123970   51380 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:08:05.124086   51380 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:08:05.124117   51380 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0920 21:08:05.124444   51380 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0920 21:08:05.126633   51380 out.go:201] 
	W0920 21:08:05.127810   51380 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 21:08:05.128991   51380 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

TestFunctional/parallel/StatusCmd (0.41s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.41s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.20s)

TestFunctional/parallel/ProfileCmd/profile_list (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "150.141163ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "41.58973ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.19s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "149.669308ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "41.888639ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.19s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-rkb72" [bc1053b0-3808-4db5-a3b2-e5b40fc63090] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-rkb72" [bc1053b0-3808-4db5-a3b2-e5b40fc63090] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003774453s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.14s)
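
The deploy step is plain kubectl: create a deployment from the echoserver image, expose it as a NodePort, and wait for the pod. A sketch assuming the minikube context, with `kubectl wait` standing in for the harness's pod polling:

	package main

	import (
		"log"
		"os/exec"
	)

	// kubectl runs a command against the minikube context and aborts on failure.
	func kubectl(args ...string) {
		args = append([]string{"--context", "minikube"}, args...)
		if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
	}

	func main() {
		kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver:1.8")
		kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
		kubectl("wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=10m")
	}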

TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "324.248454ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:30524
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.14s)

TestFunctional/parallel/ServiceCmd/Format (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.14s)

TestFunctional/parallel/ServiceCmd/URL (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:30524
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.14s)

TestFunctional/parallel/ServiceCmdConnect (7.28s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-z5prt" [5a3ce04f-3815-45f9-a12c-adde2632c489] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-z5prt" [5a3ce04f-3815-45f9-a12c-adde2632c489] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003524666s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:30450
functional_test.go:1675: http://10.138.0.48:30450: success! body:

Hostname: hello-node-connect-67bdd5bbb4-z5prt

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:30450
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.28s)
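
The connect check resolves the service URL and issues an ordinary HTTP GET; the echoserver reply above is just the response body. A minimal sketch, assuming `service --url` prints a single URL as it does with the none driver here:

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask minikube for the NodePort URL, then fetch it.
		out, err := exec.Command("minikube", "-p", "minikube", "service", "hello-node-connect", "--url").Output()
		if err != nil {
			log.Fatal(err)
		}
		url := strings.TrimSpace(string(out))
		resp, err := http.Get(url)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: success! body:\n%s\n", url, body)
	}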

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (21.45s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6eb777e0-3c0e-4cd0-a02b-b633dcccf4fe] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00347665s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [374a3c00-6728-4d12-a530-71a0bb35aad1] Pending
helpers_test.go:344: "sp-pod" [374a3c00-6728-4d12-a530-71a0bb35aad1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [374a3c00-6728-4d12-a530-71a0bb35aad1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003320767s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [26283d6b-f61c-4b40-be9f-ab124cd3dd4d] Pending
helpers_test.go:344: "sp-pod" [26283d6b-f61c-4b40-be9f-ab124cd3dd4d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [26283d6b-f61c-4b40-be9f-ab124cd3dd4d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003841037s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.45s)
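
The point of the sequence above is persistence: a file written through the claim must survive deleting and re-creating the pod. A sketch of that sequence, reusing the testdata manifests named in the log; `kubectl wait` replaces the harness's readiness polling:

	package main

	import (
		"log"
		"os/exec"
	)

	// kubectl runs against the minikube context, failing loudly like the harness.
	func kubectl(args ...string) []byte {
		args = append([]string{"--context", "minikube"}, args...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
		return out
	}

	func main() {
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo") // write through the claim
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("wait", "--for=condition=ready", "pod/sp-pod", "--timeout=3m")
		// The file must still be there on the re-mounted volume.
		log.Printf("%s", kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
	}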

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 53071: operation not permitted
helpers_test.go:508: unable to kill pid 53023: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ceaeab12-c99e-4852-981b-101bebb2cd3d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ceaeab12-c99e-4852-981b-101bebb2cd3d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003600535s
I0920 21:08:56.640145   16380 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.236.143 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MySQL (20.99s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-k4nrv" [2260626c-63fd-4162-aaf8-b55858f6ccf1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-k4nrv" [2260626c-63fd-4162-aaf8-b55858f6ccf1] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.003911691s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-k4nrv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-k4nrv -- mysql -ppassword -e "show databases;": exit status 1 (126.291876ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0920 21:09:13.144154   16380 retry.go:31] will retry after 705.334011ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-k4nrv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-k4nrv -- mysql -ppassword -e "show databases;": exit status 1 (112.785465ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0920 21:09:13.962962   16380 retry.go:31] will retry after 2.185111419s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-k4nrv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-k4nrv -- mysql -ppassword -e "show databases;": exit status 1 (106.923516ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0920 21:09:16.255959   16380 retry.go:31] will retry after 1.479270597s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-k4nrv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.99s)
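
The retry.go lines show the harness backing off while mysqld initializes; the access-denied and socket errors are expected in that window. A sketch of the same loop, targeting the deployment instead of the per-run pod name (an assumption, not what the test does):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// mysqld needs time to come up, so early failures are tolerated; back
		// off with a growing delay between attempts, as the log above shows.
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "--context", "minikube", "exec", "deploy/mysql",
				"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
		fmt.Println("mysql never became reachable")
	}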

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.63s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.626356792s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.63s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (14.93s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.930611922s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (14.93s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.39s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.39s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestImageBuild/serial/Setup (14.74s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.740846139s)
--- PASS: TestImageBuild/serial/Setup (14.74s)

TestImageBuild/serial/NormalBuild (1.5s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.500920189s)
--- PASS: TestImageBuild/serial/NormalBuild (1.50s)

TestImageBuild/serial/BuildWithBuildArg (0.81s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.81s)

TestImageBuild/serial/BuildWithDockerIgnore (0.57s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.57s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.61s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.61s)

TestJSONOutput/start/Command (27.37s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (27.367033282s)
--- PASS: TestJSONOutput/start/Command (27.37s)
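
With --output=json, `minikube start` writes one CloudEvent per line (the same shape as the record under TestErrorJSONOutput at the end of this report); the Audit and parallel subtests below then validate those events. A sketch that decodes the stream as it arrives; the field selection is illustrative:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// cloudEvent models only the fields of interest from minikube's JSON output.
	type cloudEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		cmd := exec.Command("minikube", "start", "-p", "minikube", "--output=json",
			"--user=testUser", "--memory=2200", "--wait=true", "--driver=none", "--bootstrapper=kubeadm")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			log.Fatal(err)
		}
		if err := cmd.Start(); err != nil {
			log.Fatal(err)
		}
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // tolerate any non-JSON noise on the stream
			}
			fmt.Printf("%s step %s/%s: %s\n", ev.Type, ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
		if err := cmd.Wait(); err != nil {
			log.Fatal(err)
		}
	}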

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.52s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.52s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.42s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.42s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.39s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.392433044s)
--- PASS: TestJSONOutput/stop/Command (10.39s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.067746ms)

-- stdout --
	{"specversion":"1.0","id":"fab5b18c-37f5-498a-b6ad-01051acfdbda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2d88a17a-ed62-4c17-8cbf-cd26cc60f76a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"6d33aa43-0979-4e7d-9c9a-b7019122b6c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"94f45f9f-9135-4f43-8ea0-c16a288e3e2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-9477/kubeconfig"}}
	{"specversion":"1.0","id":"6526bfce-877d-4ab0-b625-a70460a3efaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9477/.minikube"}}
	{"specversion":"1.0","id":"1ab9c17b-239f-495a-9d4f-6f33b31065eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"05923ac4-ef40-4dbe-9f0b-ebc33a8874e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fbb8f2c5-aa39-460a-8da9-5b34cc39b8fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.19s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (33.64s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.348293331s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.485803399s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.254098175s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (33.64s)

TestPause/serial/Start (27.18s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (27.181391329s)
--- PASS: TestPause/serial/Start (27.18s)

TestPause/serial/SecondStartNoReconfiguration (33.6s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (33.599800735s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.60s)

TestPause/serial/Pause (0.48s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.48s)

TestPause/serial/VerifyStatus (0.13s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (126.913916ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)

TestPause/serial/Unpause (0.39s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.39s)

TestPause/serial/PauseAgain (0.54s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.54s)

TestPause/serial/DeletePaused (1.76s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.763557903s)
--- PASS: TestPause/serial/DeletePaused (1.76s)

TestPause/serial/VerifyDeletedResources (0.06s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

TestRunningBinaryUpgrade (64.84s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2903915502 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2903915502 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (27.369795736s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (34.081067309s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (2.860931245s)
--- PASS: TestRunningBinaryUpgrade (64.84s)

TestStoppedBinaryUpgrade/Setup (0.43s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

TestStoppedBinaryUpgrade/Upgrade (50.13s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2971620518 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2971620518 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.852964231s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2971620518 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2971620518 -p minikube stop: (23.638086808s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (11.636228408s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (50.13s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

TestKubernetesUpgrade (307.4s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (27.755588108s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.297779049s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (71.701105ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m18.767653826s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (64.390375ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9477/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9477/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.120694523s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.268231265s)
--- PASS: TestKubernetesUpgrade (307.40s)

Test skip (56/167)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
102 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
104 TestFunctional/parallel/SSHCmd 0
105 TestFunctional/parallel/CpCmd 0
107 TestFunctional/parallel/FileSync 0
108 TestFunctional/parallel/CertSync 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/ImageCommands 0
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0
125 TestGvisorAddon 0
126 TestMultiControlPlane 0
134 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
161 TestKicCustomNetwork 0
162 TestKicExistingNetwork 0
163 TestKicCustomSubnet 0
164 TestKicStaticIP 0
167 TestMountStart 0
168 TestMultiNode 0
169 TestNetworkPlugins 0
170 TestNoKubernetes 0
171 TestChangeNoneUser 0
182 TestPreload 0
183 TestScheduledStopWindows 0
184 TestScheduledStopUnix 0
185 TestSkaffold 0
188 TestStartStop/group/old-k8s-version 0.13
189 TestStartStop/group/newest-cni 0.13
190 TestStartStop/group/default-k8s-diff-port 0.13
191 TestStartStop/group/no-preload 0.12
192 TestStartStop/group/disable-driver-mounts 0.13
193 TestStartStop/group/embed-certs 0.13
194 TestInsufficientStorage 0
201 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.13s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.13s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

TestStartStop/group/default-k8s-diff-port (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

TestStartStop/group/no-preload (0.12s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.12s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.13s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)