Test Report: none_Linux 19700

8b226b9d2c09f79dcc3a887682b5a6bd27a95904:2024-09-24:36357

Failed tests (1/166)

|-------|-------------------------------|----------|
| Order |          Failed test          | Duration |
|-------|-------------------------------|----------|
|    33 | TestAddons/parallel/Registry  |   71.98s |
|-------|-------------------------------|----------|
TestAddons/parallel/Registry (71.98s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.731465ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-hngnq" [12f00669-2ddf-46ee-94c2-081f0f063e2f] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003568168s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qvdm8" [9f8ff49d-1599-4142-892a-bb601f73001a] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003783748s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.08883939s)

-- stdout --
	pod "registry-test" deleted

                                                
                                                
-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/24 18:32:03 [DEBUG] GET http://10.128.15.240:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
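
Note: both registry pods reported healthy within about 5 seconds each; what failed is the in-cluster HTTP probe at addons_test.go:343, which timed out after 1m0s. As a sketch, the probe can be re-run by hand using the exact command from the log above, together with a host-side check of the endpoint polled in the DEBUG line (the curl flags mirror the curl -sS -m 2 invocation that appears later in this log):

  # In-cluster probe: run a throwaway busybox pod and check that the
  # registry Service DNS name resolves and responds
  kubectl --context minikube run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

  # Host-side probe of the registry endpoint on the node IP
  curl -sS -m 2 http://10.128.15.240:5000
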
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC | 24 Sep 24 18:19 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC | 24 Sep 24 18:19 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC | 24 Sep 24 18:19 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC | 24 Sep 24 18:19 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC | 24 Sep 24 18:19 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC | 24 Sep 24 18:19 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:44331               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC | 24 Sep 24 18:19 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC | 24 Sep 24 18:20 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:20 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 24 Sep 24 18:20 UTC | 24 Sep 24 18:22 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 24 Sep 24 18:22 UTC | 24 Sep 24 18:22 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 24 Sep 24 18:32 UTC | 24 Sep 24 18:32 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 24 Sep 24 18:32 UTC | 24 Sep 24 18:32 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
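	# Note (editorial, not part of the captured log): the wrapped Args in the
	# addon-enabling start row above reassemble into this single invocation
	# (binary path as used by the other commands in this report):
	#
	#   out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr \
	#     --addons=registry --addons=metrics-server --addons=volumesnapshots \
	#     --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
	#     --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	#     --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
	#     --driver=none --bootstrapper=kubeadm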
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:20:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:20:29.072866   14300 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:20:29.072963   14300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:29.072967   14300 out.go:358] Setting ErrFile to fd 2...
	I0924 18:20:29.072972   14300 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:20:29.073163   14300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3578/.minikube/bin
	I0924 18:20:29.073808   14300 out.go:352] Setting JSON to false
	I0924 18:20:29.074620   14300 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":180,"bootTime":1727201849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:20:29.074716   14300 start.go:139] virtualization: kvm guest
	I0924 18:20:29.077212   14300 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0924 18:20:29.078693   14300 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19700-3578/.minikube/cache/preloaded-tarball: no such file or directory
	I0924 18:20:29.078708   14300 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:20:29.078737   14300 notify.go:220] Checking for updates...
	I0924 18:20:29.081624   14300 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:20:29.082887   14300 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3578/kubeconfig
	I0924 18:20:29.084033   14300 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3578/.minikube
	I0924 18:20:29.085222   14300 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:20:29.086458   14300 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:20:29.087946   14300 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:20:29.099093   14300 out.go:177] * Using the none driver based on user configuration
	I0924 18:20:29.100312   14300 start.go:297] selected driver: none
	I0924 18:20:29.100324   14300 start.go:901] validating driver "none" against <nil>
	I0924 18:20:29.100336   14300 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:20:29.100386   14300 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0924 18:20:29.100702   14300 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0924 18:20:29.101262   14300 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 18:20:29.101526   14300 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:20:29.101569   14300 cni.go:84] Creating CNI manager for ""
	I0924 18:20:29.101643   14300 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 18:20:29.101656   14300 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0924 18:20:29.101715   14300 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:20:29.103236   14300 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0924 18:20:29.104813   14300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/config.json ...
	I0924 18:20:29.104844   14300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/config.json: {Name:mke696bfbfbdff8d078b1f7263bb96b4273fd7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:29.104992   14300 start.go:360] acquireMachinesLock for minikube: {Name:mk37bb57c5e8ef7a9274c63fe3f6c4091a1c55b0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0924 18:20:29.105028   14300 start.go:364] duration metric: took 19.925µs to acquireMachinesLock for "minikube"
	I0924 18:20:29.105046   14300 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 18:20:29.105109   14300 start.go:125] createHost starting for "" (driver="none")
	I0924 18:20:29.106676   14300 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0924 18:20:29.107768   14300 exec_runner.go:51] Run: systemctl --version
	I0924 18:20:29.110389   14300 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0924 18:20:29.110430   14300 client.go:168] LocalClient.Create starting
	I0924 18:20:29.110520   14300 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3578/.minikube/certs/ca.pem
	I0924 18:20:29.110558   14300 main.go:141] libmachine: Decoding PEM data...
	I0924 18:20:29.110577   14300 main.go:141] libmachine: Parsing certificate...
	I0924 18:20:29.110629   14300 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19700-3578/.minikube/certs/cert.pem
	I0924 18:20:29.110659   14300 main.go:141] libmachine: Decoding PEM data...
	I0924 18:20:29.110687   14300 main.go:141] libmachine: Parsing certificate...
	I0924 18:20:29.111040   14300 client.go:171] duration metric: took 601.597µs to LocalClient.Create
	I0924 18:20:29.111064   14300 start.go:167] duration metric: took 683.013µs to libmachine.API.Create "minikube"
	I0924 18:20:29.111070   14300 start.go:293] postStartSetup for "minikube" (driver="none")
	I0924 18:20:29.111122   14300 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 18:20:29.111152   14300 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 18:20:29.120864   14300 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0924 18:20:29.120890   14300 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0924 18:20:29.120899   14300 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0924 18:20:29.122972   14300 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0924 18:20:29.124355   14300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3578/.minikube/addons for local assets ...
	I0924 18:20:29.124414   14300 filesync.go:126] Scanning /home/jenkins/minikube-integration/19700-3578/.minikube/files for local assets ...
	I0924 18:20:29.124461   14300 start.go:296] duration metric: took 13.365892ms for postStartSetup
	I0924 18:20:29.125106   14300 profile.go:143] Saving config to /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/config.json ...
	I0924 18:20:29.125269   14300 start.go:128] duration metric: took 20.148505ms to createHost
	I0924 18:20:29.125283   14300 start.go:83] releasing machines lock for "minikube", held for 20.244869ms
	I0924 18:20:29.125667   14300 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0924 18:20:29.125754   14300 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0924 18:20:29.127683   14300 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0924 18:20:29.127725   14300 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 18:20:29.137376   14300 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0924 18:20:29.137411   14300 start.go:495] detecting cgroup driver to use...
	I0924 18:20:29.137442   14300 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0924 18:20:29.137537   14300 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:20:29.157699   14300 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0924 18:20:29.167781   14300 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0924 18:20:29.177155   14300 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0924 18:20:29.177221   14300 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0924 18:20:29.186725   14300 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 18:20:29.196348   14300 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0924 18:20:29.205458   14300 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 18:20:29.214624   14300 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 18:20:29.223418   14300 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0924 18:20:29.231958   14300 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0924 18:20:29.240467   14300 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0924 18:20:29.249856   14300 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 18:20:29.257118   14300 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 18:20:29.265230   14300 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0924 18:20:29.472672   14300 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0924 18:20:29.535988   14300 start.go:495] detecting cgroup driver to use...
	I0924 18:20:29.536036   14300 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0924 18:20:29.536147   14300 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 18:20:29.556206   14300 exec_runner.go:51] Run: which cri-dockerd
	I0924 18:20:29.557102   14300 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0924 18:20:29.565443   14300 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0924 18:20:29.565460   14300 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0924 18:20:29.565503   14300 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0924 18:20:29.573588   14300 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0924 18:20:29.573722   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2999369454 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0924 18:20:29.582718   14300 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0924 18:20:29.785880   14300 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0924 18:20:30.012158   14300 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0924 18:20:30.012271   14300 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0924 18:20:30.012282   14300 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0924 18:20:30.012314   14300 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0924 18:20:30.022032   14300 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0924 18:20:30.022169   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube614564240 /etc/docker/daemon.json
	I0924 18:20:30.031965   14300 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0924 18:20:30.242966   14300 exec_runner.go:51] Run: sudo systemctl restart docker
	I0924 18:20:30.529229   14300 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0924 18:20:30.540646   14300 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0924 18:20:30.557681   14300 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0924 18:20:30.569567   14300 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0924 18:20:30.797149   14300 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0924 18:20:31.013238   14300 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0924 18:20:31.244448   14300 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0924 18:20:31.259341   14300 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0924 18:20:31.271190   14300 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0924 18:20:31.483871   14300 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0924 18:20:31.553085   14300 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0924 18:20:31.553158   14300 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0924 18:20:31.554678   14300 start.go:563] Will wait 60s for crictl version
	I0924 18:20:31.554735   14300 exec_runner.go:51] Run: which crictl
	I0924 18:20:31.556926   14300 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0924 18:20:31.587797   14300 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0924 18:20:31.587872   14300 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0924 18:20:31.608466   14300 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0924 18:20:31.632618   14300 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0924 18:20:31.632693   14300 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0924 18:20:31.635496   14300 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0924 18:20:31.636933   14300 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 18:20:31.637072   14300 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0924 18:20:31.637083   14300 kubeadm.go:934] updating node { 10.128.15.240 8443 v1.31.1 docker true true} ...
	I0924 18:20:31.637157   14300 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-15 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.128.15.240 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0924 18:20:31.637206   14300 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0924 18:20:31.686490   14300 cni.go:84] Creating CNI manager for ""
	I0924 18:20:31.686523   14300 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 18:20:31.686534   14300 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 18:20:31.686558   14300 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.128.15.240 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-15 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.128.15.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.128.15.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 18:20:31.686713   14300 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.128.15.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-15"
	  kubeletExtraArgs:
	    node-ip: 10.128.15.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.128.15.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
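	# Note (editorial, not part of the captured log): kubeadm's init output later in
	# this report (the two W0924 18:20:32 warnings) flags this kubeadm.k8s.io/v1beta3
	# spec as deprecated and suggests migrating it, verbatim, with:
	#   kubeadm config migrate --old-config old.yaml --new-config new.yaml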
	
	I0924 18:20:31.686783   14300 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 18:20:31.696681   14300 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0924 18:20:31.696733   14300 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0924 18:20:31.707025   14300 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0924 18:20:31.707043   14300 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0924 18:20:31.707077   14300 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:20:31.707082   14300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3578/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0924 18:20:31.707030   14300 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0924 18:20:31.707210   14300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3578/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0924 18:20:31.718971   14300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3578/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0924 18:20:31.755974   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1051312395 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0924 18:20:31.758651   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3173838269 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0924 18:20:31.784190   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube23954874 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0924 18:20:31.851432   14300 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 18:20:31.861248   14300 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0924 18:20:31.861270   14300 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0924 18:20:31.861313   14300 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0924 18:20:31.871744   14300 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0924 18:20:31.871876   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube382702221 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0924 18:20:31.881700   14300 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0924 18:20:31.881722   14300 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0924 18:20:31.881774   14300 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0924 18:20:31.891338   14300 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 18:20:31.891487   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1487963831 /lib/systemd/system/kubelet.service
	I0924 18:20:31.899453   14300 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0924 18:20:31.899574   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1101019201 /var/tmp/minikube/kubeadm.yaml.new
	I0924 18:20:31.908145   14300 exec_runner.go:51] Run: grep 10.128.15.240	control-plane.minikube.internal$ /etc/hosts
	I0924 18:20:31.909398   14300 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0924 18:20:32.125599   14300 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0924 18:20:32.140535   14300 certs.go:68] Setting up /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube for IP: 10.128.15.240
	I0924 18:20:32.140556   14300 certs.go:194] generating shared ca certs ...
	I0924 18:20:32.140572   14300 certs.go:226] acquiring lock for ca certs: {Name:mk35314f78ff8c39728c9bb132715ac0a002ae5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:32.140698   14300 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19700-3578/.minikube/ca.key
	I0924 18:20:32.140750   14300 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19700-3578/.minikube/proxy-client-ca.key
	I0924 18:20:32.140763   14300 certs.go:256] generating profile certs ...
	I0924 18:20:32.140827   14300 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/client.key
	I0924 18:20:32.140846   14300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/client.crt with IP's: []
	I0924 18:20:32.199901   14300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/client.crt ...
	I0924 18:20:32.199929   14300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/client.crt: {Name:mka8ba8d330e6343df46e2e5e1b12111dad3abe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:32.200085   14300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/client.key ...
	I0924 18:20:32.200101   14300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/client.key: {Name:mke1afacb2cdecb16a1434ddfa7723281427502e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:32.200187   14300 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/apiserver.key.271ff23d
	I0924 18:20:32.200206   14300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/apiserver.crt.271ff23d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.128.15.240]
	I0924 18:20:32.378414   14300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/apiserver.crt.271ff23d ...
	I0924 18:20:32.378439   14300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/apiserver.crt.271ff23d: {Name:mk4739827e962ce9612c8714214fa71934fe0331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:32.378591   14300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/apiserver.key.271ff23d ...
	I0924 18:20:32.378608   14300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/apiserver.key.271ff23d: {Name:mk467d3de864dfe881b98dfaa22fa2d3c1825746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:32.378677   14300 certs.go:381] copying /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/apiserver.crt.271ff23d -> /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/apiserver.crt
	I0924 18:20:32.378805   14300 certs.go:385] copying /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/apiserver.key.271ff23d -> /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/apiserver.key
	I0924 18:20:32.378885   14300 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/proxy-client.key
	I0924 18:20:32.378902   14300 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0924 18:20:32.541587   14300 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/proxy-client.crt ...
	I0924 18:20:32.541619   14300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/proxy-client.crt: {Name:mk25905e5debc54aa403a9fbe352407261586fe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:32.541764   14300 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/proxy-client.key ...
	I0924 18:20:32.541786   14300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/proxy-client.key: {Name:mk379aa2707a36cb8823a6d3215fa8e5911a0d4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:32.541949   14300 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3578/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 18:20:32.541987   14300 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3578/.minikube/certs/ca.pem (1078 bytes)
	I0924 18:20:32.542020   14300 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3578/.minikube/certs/cert.pem (1123 bytes)
	I0924 18:20:32.542064   14300 certs.go:484] found cert: /home/jenkins/minikube-integration/19700-3578/.minikube/certs/key.pem (1679 bytes)
	I0924 18:20:32.542687   14300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3578/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 18:20:32.542894   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1360330727 /var/lib/minikube/certs/ca.crt
	I0924 18:20:32.551555   14300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3578/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0924 18:20:32.551700   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube44259712 /var/lib/minikube/certs/ca.key
	I0924 18:20:32.559756   14300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3578/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 18:20:32.559886   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3402706494 /var/lib/minikube/certs/proxy-client-ca.crt
	I0924 18:20:32.568880   14300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3578/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0924 18:20:32.569018   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube19461193 /var/lib/minikube/certs/proxy-client-ca.key
	I0924 18:20:32.576941   14300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0924 18:20:32.577058   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3882991417 /var/lib/minikube/certs/apiserver.crt
	I0924 18:20:32.585855   14300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 18:20:32.586032   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2819078828 /var/lib/minikube/certs/apiserver.key
	I0924 18:20:32.594572   14300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 18:20:32.594689   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3751543183 /var/lib/minikube/certs/proxy-client.crt
	I0924 18:20:32.603825   14300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3578/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0924 18:20:32.603964   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3539809994 /var/lib/minikube/certs/proxy-client.key
	I0924 18:20:32.611782   14300 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0924 18:20:32.611804   14300 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:32.611836   14300 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:32.619304   14300 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19700-3578/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 18:20:32.619471   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1523963420 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:32.628854   14300 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 18:20:32.628987   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1809718102 /var/lib/minikube/kubeconfig
	I0924 18:20:32.636971   14300 exec_runner.go:51] Run: openssl version
	I0924 18:20:32.640636   14300 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 18:20:32.650204   14300 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:32.651501   14300 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 24 18:20 /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:32.651551   14300 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 18:20:32.654254   14300 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 18:20:32.662317   14300 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 18:20:32.663475   14300 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 18:20:32.663512   14300 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:20:32.663609   14300 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0924 18:20:32.679199   14300 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 18:20:32.689562   14300 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 18:20:32.698074   14300 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0924 18:20:32.717937   14300 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 18:20:32.727352   14300 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 18:20:32.727413   14300 kubeadm.go:157] found existing configuration files:
	
	I0924 18:20:32.727453   14300 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 18:20:32.740272   14300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 18:20:32.740339   14300 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 18:20:32.748499   14300 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 18:20:32.757292   14300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 18:20:32.757343   14300 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 18:20:32.765032   14300 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 18:20:32.772667   14300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 18:20:32.772716   14300 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 18:20:32.780302   14300 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 18:20:32.788280   14300 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 18:20:32.788328   14300 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 18:20:32.796899   14300 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0924 18:20:32.828953   14300 kubeadm.go:310] W0924 18:20:32.828812   15188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 18:20:32.829554   14300 kubeadm.go:310] W0924 18:20:32.829470   15188 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
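Both warnings above come from kubeadm itself: minikube still generates a kubeadm.k8s.io/v1beta3 config, which kubeadm in v1.31 accepts but flags as deprecated. The migration command kubeadm recommends can be run directly against the generated file; a minimal sketch, assuming the config path from the log (the output path here is hypothetical):

    # Rewrite the deprecated v1beta3 config using the current kubeadm API version
    sudo kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml  # hypothetical output path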
	I0924 18:20:32.831181   14300 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 18:20:32.831208   14300 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 18:20:32.928596   14300 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 18:20:32.928708   14300 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 18:20:32.928718   14300 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 18:20:32.928723   14300 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 18:20:32.939412   14300 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 18:20:32.942448   14300 out.go:235]   - Generating certificates and keys ...
	I0924 18:20:32.942496   14300 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 18:20:32.942511   14300 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 18:20:33.142125   14300 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 18:20:33.246053   14300 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 18:20:33.349798   14300 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 18:20:33.528389   14300 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 18:20:33.685249   14300 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 18:20:33.685402   14300 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-15] and IPs [10.128.15.240 127.0.0.1 ::1]
	I0924 18:20:33.748847   14300 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 18:20:33.748941   14300 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-15] and IPs [10.128.15.240 127.0.0.1 ::1]
	I0924 18:20:33.857774   14300 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 18:20:34.260459   14300 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 18:20:34.371247   14300 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 18:20:34.371426   14300 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 18:20:34.505304   14300 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 18:20:34.702341   14300 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 18:20:34.795989   14300 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 18:20:34.960015   14300 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 18:20:35.048034   14300 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 18:20:35.048557   14300 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 18:20:35.050799   14300 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 18:20:35.053136   14300 out.go:235]   - Booting up control plane ...
	I0924 18:20:35.053180   14300 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 18:20:35.053238   14300 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 18:20:35.054047   14300 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 18:20:35.072183   14300 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 18:20:35.076778   14300 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 18:20:35.076823   14300 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 18:20:35.314582   14300 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 18:20:35.314602   14300 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 18:20:35.816230   14300 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.642377ms
	I0924 18:20:35.816249   14300 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 18:20:40.817854   14300 kubeadm.go:310] [api-check] The API server is healthy after 5.001591456s
	I0924 18:20:40.828299   14300 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 18:20:40.839347   14300 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 18:20:40.857037   14300 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 18:20:40.857061   14300 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-15 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 18:20:40.866385   14300 kubeadm.go:310] [bootstrap-token] Using token: es25ds.wkk9xyr39n6v0kh5
	I0924 18:20:40.867911   14300 out.go:235]   - Configuring RBAC rules ...
	I0924 18:20:40.867944   14300 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 18:20:40.872520   14300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 18:20:40.879948   14300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 18:20:40.882745   14300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 18:20:40.886545   14300 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 18:20:40.889298   14300 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 18:20:41.222965   14300 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 18:20:41.642923   14300 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 18:20:42.223261   14300 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 18:20:42.224151   14300 kubeadm.go:310] 
	I0924 18:20:42.224168   14300 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 18:20:42.224187   14300 kubeadm.go:310] 
	I0924 18:20:42.224191   14300 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 18:20:42.224195   14300 kubeadm.go:310] 
	I0924 18:20:42.224199   14300 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 18:20:42.224203   14300 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 18:20:42.224207   14300 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 18:20:42.224210   14300 kubeadm.go:310] 
	I0924 18:20:42.224213   14300 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 18:20:42.224217   14300 kubeadm.go:310] 
	I0924 18:20:42.224220   14300 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 18:20:42.224224   14300 kubeadm.go:310] 
	I0924 18:20:42.224227   14300 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 18:20:42.224232   14300 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 18:20:42.224236   14300 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 18:20:42.224239   14300 kubeadm.go:310] 
	I0924 18:20:42.224244   14300 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 18:20:42.224251   14300 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 18:20:42.224253   14300 kubeadm.go:310] 
	I0924 18:20:42.224256   14300 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token es25ds.wkk9xyr39n6v0kh5 \
	I0924 18:20:42.224259   14300 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b522f057d37d918ecbd31710ea72a97012c54ea18da00577bb2067f35cdb029b \
	I0924 18:20:42.224265   14300 kubeadm.go:310] 	--control-plane 
	I0924 18:20:42.224268   14300 kubeadm.go:310] 
	I0924 18:20:42.224273   14300 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 18:20:42.224275   14300 kubeadm.go:310] 
	I0924 18:20:42.224280   14300 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token es25ds.wkk9xyr39n6v0kh5 \
	I0924 18:20:42.224283   14300 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b522f057d37d918ecbd31710ea72a97012c54ea18da00577bb2067f35cdb029b 
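The join command kubeadm prints can be re-derived later on the control-plane host, which is useful once the bootstrap token rotates. A sketch, assuming the non-default certificate directory /var/lib/minikube/certs that kubeadm reported earlier:

    # List bootstrap tokens (the token printed above expires, 24h by default)
    sudo kubeadm token list
    # Recompute the discovery CA cert hash and compare with the sha256 in the log
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | sha256sum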
	I0924 18:20:42.226905   14300 cni.go:84] Creating CNI manager for ""
	I0924 18:20:42.226932   14300 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0924 18:20:42.228871   14300 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0924 18:20:42.230306   14300 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0924 18:20:42.240483   14300 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0924 18:20:42.240638   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube689519287 /etc/cni/net.d/1-k8s.conflist
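The 496-byte conflist staged here is the bridge CNI config recommended two lines up; its exact contents are not in the log, but the file can be inspected in place once written. A sketch, assuming only the path shown above:

    # Show the bridge CNI config minikube installed for the none driver
    sudo cat /etc/cni/net.d/1-k8s.conflist
    # After pods schedule, the bridge device the plugin manages should exist
    ip link show type bridge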
	I0924 18:20:42.250453   14300 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 18:20:42.250517   14300 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:42.250585   14300 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-15 minikube.k8s.io/updated_at=2024_09_24T18_20_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0924 18:20:42.260059   14300 ops.go:34] apiserver oom_adj: -16
	I0924 18:20:42.317496   14300 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:42.817769   14300 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:43.318581   14300 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:43.818464   14300 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:44.318300   14300 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:44.817882   14300 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:45.317900   14300 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:45.817776   14300 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:46.318598   14300 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:46.817590   14300 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 18:20:46.885178   14300 kubeadm.go:1113] duration metric: took 4.634707874s to wait for elevateKubeSystemPrivileges
	I0924 18:20:46.885330   14300 kubeadm.go:394] duration metric: took 14.221818048s to StartCluster
	I0924 18:20:46.885355   14300 settings.go:142] acquiring lock: {Name:mk8f6bff562e0ddf2834641ab94a61cd415e6791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.885417   14300 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19700-3578/kubeconfig
	I0924 18:20:46.886742   14300 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19700-3578/kubeconfig: {Name:mk9da70165c82a153917b2a82f1453329e941475 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 18:20:46.886985   14300 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 18:20:46.887070   14300 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
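The toEnable map above drives everything that follows: each addon set to true has its manifests staged under /etc/kubernetes/addons and applied by its own goroutine, which is why the log interleaves heavily from here on. The same switches are available interactively via standard minikube commands:

    # Inspect and toggle addons for this profile
    minikube addons list -p minikube
    minikube addons enable registry -p minikube
    minikube addons disable registry -p minikube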
	I0924 18:20:46.887252   14300 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 18:20:46.887263   14300 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0924 18:20:46.887261   14300 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0924 18:20:46.887311   14300 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0924 18:20:46.887315   14300 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0924 18:20:46.887322   14300 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0924 18:20:46.887309   14300 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0924 18:20:46.887338   14300 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0924 18:20:46.887347   14300 addons.go:69] Setting registry=true in profile "minikube"
	I0924 18:20:46.887348   14300 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0924 18:20:46.887362   14300 addons.go:234] Setting addon registry=true in "minikube"
	I0924 18:20:46.887363   14300 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0924 18:20:46.887369   14300 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0924 18:20:46.887382   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:46.887388   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:46.887395   14300 addons.go:69] Setting volcano=true in profile "minikube"
	I0924 18:20:46.887408   14300 addons.go:234] Setting addon volcano=true in "minikube"
	I0924 18:20:46.887427   14300 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0924 18:20:46.887453   14300 mustload.go:65] Loading cluster: minikube
	I0924 18:20:46.887471   14300 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0924 18:20:46.887490   14300 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0924 18:20:46.887505   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:46.887458   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:46.887329   14300 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0924 18:20:46.887337   14300 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0924 18:20:46.888165   14300 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0924 18:20:46.888173   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:46.888225   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:46.888874   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.888904   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.888946   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.888951   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:46.888954   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.888962   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.888970   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.888999   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:46.889001   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:46.889144   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.889161   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.889216   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:46.889287   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.889317   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.889361   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:46.887339   14300 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0924 18:20:46.889971   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:46.887263   14300 addons.go:69] Setting yakd=true in profile "minikube"
	I0924 18:20:46.889413   14300 out.go:177] * Configuring local host environment ...
	I0924 18:20:46.887329   14300 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0924 18:20:46.890340   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:46.889482   14300 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 18:20:46.891026   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.891051   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.891064   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.891068   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.891104   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:46.891115   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:46.887363   14300 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0924 18:20:46.891470   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:46.890110   14300 addons.go:234] Setting addon yakd=true in "minikube"
	I0924 18:20:46.892093   14300 host.go:66] Checking if "minikube" exists ...
	W0924 18:20:46.892156   14300 out.go:270] * 
	W0924 18:20:46.892183   14300 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0924 18:20:46.892192   14300 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0924 18:20:46.892204   14300 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0924 18:20:46.892211   14300 out.go:270] * 
	W0924 18:20:46.892621   14300 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0924 18:20:46.892647   14300 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	I0924 18:20:46.893181   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.893206   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.893245   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.893267   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.893309   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:46.893249   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:46.893595   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.893613   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.893678   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.893690   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.893722   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0924 18:20:46.893255   14300 out.go:270] * 
	I0924 18:20:46.893828   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0924 18:20:46.893928   14300 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0924 18:20:46.893953   14300 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0924 18:20:46.893965   14300 out.go:270] * 
	W0924 18:20:46.893980   14300 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
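The warning lines above (interleaved with the addon goroutines) are the standard 'none' driver notice: minikube ran as root, so kubeconfig and profile state landed under /home/jenkins and may need re-owning before an unprivileged user can run kubectl. The env var it points at automates that; a short sketch:

    # Let minikube chown its artifacts back to the invoking user automatically
    export CHANGE_MINIKUBE_NONE_USER=true
    sudo -E minikube start --driver=none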
	I0924 18:20:46.894029   14300 start.go:235] Will wait 6m0s for node &{Name: IP:10.128.15.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0924 18:20:46.898418   14300 out.go:177] * Verifying Kubernetes components...
	I0924 18:20:46.899916   14300 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0924 18:20:46.908262   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:46.908550   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:46.908704   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:46.908722   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:46.924415   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.924449   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.924459   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:46.924480   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:46.924496   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:46.924514   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:46.927635   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:46.930481   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:46.939232   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:46.939300   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:46.957523   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:46.957594   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:46.965778   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:46.965832   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:46.965884   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:46.973407   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:46.973491   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:46.976114   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:46.976224   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:46.978620   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:46.980671   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:46.981608   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:46.981804   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:46.983439   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:46.983471   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:46.985800   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:46.985853   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:46.986658   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:46.988745   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
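Each addon goroutine repeats the same liveness dance visible throughout this section: resolve the apiserver's cgroup from its PID, confirm the cgroup v1 freezer is THAWED (that is, the pod is not paused), then probe /healthz. The checks can be reproduced by hand; a sketch using the PID and endpoint from the log:

    # Locate the apiserver's freezer cgroup and confirm it is not frozen
    sudo egrep '^[0-9]+:freezer:' /proc/15621/cgroup
    # Probe the apiserver health endpoint (self-signed serving cert, hence -k)
    curl -k https://10.128.15.240:8443/healthz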
	I0924 18:20:46.990593   14300 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0924 18:20:46.991837   14300 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0924 18:20:46.991867   14300 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0924 18:20:46.992025   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3391655368 /etc/kubernetes/addons/ig-namespace.yaml
	I0924 18:20:46.997389   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:46.997552   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:46.999031   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:46.999052   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.002404   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.005006   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:47.005028   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.005535   14300 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0924 18:20:47.005929   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:47.005974   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:47.006200   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.007897   14300 out.go:177]   - Using image docker.io/registry:2.8.3
	I0924 18:20:47.008400   14300 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:20:47.010371   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0924 18:20:47.010719   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1455590416 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:20:47.010894   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:47.010932   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:47.011224   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.012585   14300 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0924 18:20:47.012609   14300 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0924 18:20:47.012752   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:47.012780   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:47.012715   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube697605746 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0924 18:20:47.015304   14300 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0924 18:20:47.015591   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:47.017095   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:47.017140   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:47.017418   14300 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0924 18:20:47.017479   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:47.018005   14300 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0924 18:20:47.018036   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0924 18:20:47.018200   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube563370708 /etc/kubernetes/addons/registry-rc.yaml
	I0924 18:20:47.019016   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:47.019036   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.019532   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:47.019548   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.020314   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:47.020336   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:47.020365   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:47.024057   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.025968   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.026269   14300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0924 18:20:47.027652   14300 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0924 18:20:47.029070   14300 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 18:20:47.029169   14300 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 18:20:47.029408   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3775144401 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 18:20:47.029617   14300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0924 18:20:47.031199   14300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0924 18:20:47.032903   14300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0924 18:20:47.034212   14300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0924 18:20:47.035814   14300 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0924 18:20:47.037668   14300 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0924 18:20:47.039044   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:47.039066   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.039100   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 18:20:47.039431   14300 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0924 18:20:47.039452   14300 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0924 18:20:47.039599   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2066588412 /etc/kubernetes/addons/ig-role.yaml
	I0924 18:20:47.040183   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:47.040236   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:47.045305   14300 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0924 18:20:47.045339   14300 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0924 18:20:47.045466   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3035086185 /etc/kubernetes/addons/registry-svc.yaml
	I0924 18:20:47.046994   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:47.047027   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.048499   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.052440   14300 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0924 18:20:47.053238   14300 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 18:20:47.053264   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0924 18:20:47.053393   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1514335755 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 18:20:47.053628   14300 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 18:20:47.053792   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.054727   14300 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0924 18:20:47.054780   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:47.055391   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:47.055403   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:47.055433   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:47.056445   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:47.056466   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.056649   14300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0924 18:20:47.056679   14300 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0924 18:20:47.056828   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2194174923 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0924 18:20:47.058296   14300 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:20:47.058322   14300 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0924 18:20:47.058332   14300 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:20:47.058378   14300 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:20:47.062233   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.064136   14300 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0924 18:20:47.065318   14300 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
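This one-liner rewrites the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to 127.0.0.1 (sensible for the none driver, where the host is the node) and adds the log directive before errors. The result is easy to check after the replace:

    # Confirm the injected hosts block and log directive landed in the Corefile
    kubectl -n kube-system get configmap coredns -o yaml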
	I0924 18:20:47.065675   14300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0924 18:20:47.065705   14300 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0924 18:20:47.065869   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube107507482 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0924 18:20:47.069399   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:47.069458   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:47.070853   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:47.070904   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:47.072883   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:47.072909   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.073845   14300 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0924 18:20:47.073869   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0924 18:20:47.073979   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2165954903 /etc/kubernetes/addons/registry-proxy.yaml
	I0924 18:20:47.077620   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:47.077761   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.077785   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:47.082732   14300 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0924 18:20:47.082758   14300 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0924 18:20:47.083226   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2876023926 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0924 18:20:47.083474   14300 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0924 18:20:47.083503   14300 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0924 18:20:47.083630   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube497476710 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0924 18:20:47.090190   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:47.090215   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.095534   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
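This apply creates the registry addon objects staged above: a registry ReplicationController and Service plus the registry proxy. Assuming the object names and labels mirror the manifest files (an assumption, not confirmed by this log), they can be watched coming up with:

    # Watch the registry addon components start (names/labels assumed from manifests)
    kubectl -n kube-system get rc/registry svc/registry
    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry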
	I0924 18:20:47.097047   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:47.104489   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.113087   14300 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0924 18:20:47.114596   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:47.114621   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.115144   14300 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0924 18:20:47.116562   14300 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 18:20:47.116997   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:47.117016   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.120088   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.122929   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.123129   14300 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0924 18:20:47.124532   14300 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0924 18:20:47.126169   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:47.126356   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:47.126246   14300 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0924 18:20:47.126294   14300 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0924 18:20:47.129425   14300 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0924 18:20:47.133909   14300 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0924 18:20:47.135137   14300 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 18:20:47.135168   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0924 18:20:47.135177   14300 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0924 18:20:47.135323   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1465757769 /etc/kubernetes/addons/yakd-ns.yaml
	I0924 18:20:47.135311   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:47.135368   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:47.135323   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2607217975 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 18:20:47.135771   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2447347870 /etc/kubernetes/addons/volcano-deployment.yaml
	I0924 18:20:47.135140   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0924 18:20:47.135943   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3031862081 /etc/kubernetes/addons/deployment.yaml
	I0924 18:20:47.136821   14300 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0924 18:20:47.136842   14300 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0924 18:20:47.136933   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2305882032 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0924 18:20:47.137377   14300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0924 18:20:47.137406   14300 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0924 18:20:47.137516   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2513045564 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0924 18:20:47.138306   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 18:20:47.139087   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3074484672 /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:20:47.141137   14300 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0924 18:20:47.141215   14300 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0924 18:20:47.141320   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3299013370 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0924 18:20:47.160703   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 18:20:47.167470   14300 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0924 18:20:47.167968   14300 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0924 18:20:47.168192   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1302839583 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0924 18:20:47.189159   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0924 18:20:47.189615   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0924 18:20:47.189953   14300 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0924 18:20:47.189985   14300 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0924 18:20:47.190038   14300 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0924 18:20:47.190107   14300 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0924 18:20:47.190126   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4057794046 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0924 18:20:47.190207   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2442467937 /etc/kubernetes/addons/yakd-sa.yaml
	I0924 18:20:47.196059   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:47.196084   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.196176   14300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0924 18:20:47.196202   14300 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0924 18:20:47.196332   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube287138083 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0924 18:20:47.200891   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.203609   14300 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0924 18:20:47.205115   14300 out.go:177]   - Using image docker.io/busybox:stable
	I0924 18:20:47.206639   14300 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:20:47.206673   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0924 18:20:47.206822   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1524470867 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:20:47.211041   14300 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0924 18:20:47.211064   14300 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0924 18:20:47.211172   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2278916604 /etc/kubernetes/addons/yakd-crb.yaml
	I0924 18:20:47.211336   14300 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0924 18:20:47.211350   14300 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0924 18:20:47.211438   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube170199342 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0924 18:20:47.211590   14300 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 18:20:47.211603   14300 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 18:20:47.211682   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1842097473 /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 18:20:47.213333   14300 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0924 18:20:47.213357   14300 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0924 18:20:47.213463   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1887479542 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0924 18:20:47.225114   14300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0924 18:20:47.225149   14300 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0924 18:20:47.225284   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2401356663 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0924 18:20:47.234991   14300 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:20:47.235016   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0924 18:20:47.235113   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3637690094 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:20:47.235667   14300 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0924 18:20:47.235690   14300 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0924 18:20:47.235809   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2843280135 /etc/kubernetes/addons/yakd-svc.yaml
	I0924 18:20:47.257578   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:47.257618   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:47.262614   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:47.262661   14300 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 18:20:47.262681   14300 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0924 18:20:47.262689   14300 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0924 18:20:47.262742   14300 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0924 18:20:47.267796   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
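metrics-server arrives as four manifests applied together (APIService, Deployment, RBAC, Service). Once the Deployment is ready and the aggregated API is Available, resource metrics start flowing, which gives a quick smoke test:

    # These succeed only after metrics-server is serving the metrics API
    kubectl top nodes
    kubectl top pods -n kube-system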
	I0924 18:20:47.268540   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:20:47.279522   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 18:20:47.280797   14300 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0924 18:20:47.280832   14300 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0924 18:20:47.280972   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2331814242 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0924 18:20:47.290866   14300 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:20:47.290899   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0924 18:20:47.292047   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4241954388 /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:20:47.298487   14300 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 18:20:47.298629   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3958616854 /etc/kubernetes/addons/storageclass.yaml
	I0924 18:20:47.317570   14300 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0924 18:20:47.317611   14300 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0924 18:20:47.317748   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4075937187 /etc/kubernetes/addons/ig-crd.yaml
	I0924 18:20:47.331298   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0924 18:20:47.335176   14300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0924 18:20:47.335214   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0924 18:20:47.335363   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3812801102 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0924 18:20:47.353725   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 18:20:47.359161   14300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0924 18:20:47.359203   14300 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0924 18:20:47.359350   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1805964076 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0924 18:20:47.390189   14300 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0924 18:20:47.393066   14300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0924 18:20:47.393104   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0924 18:20:47.393254   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3888679132 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0924 18:20:47.401812   14300 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:20:47.401847   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0924 18:20:47.402007   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube937600030 /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:20:47.450925   14300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0924 18:20:47.450964   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0924 18:20:47.451129   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube661908719 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0924 18:20:47.499865   14300 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:20:47.499904   14300 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0924 18:20:47.500041   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube678518960 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:20:47.509209   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 18:20:47.549205   14300 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-15" to be "Ready" ...
	I0924 18:20:47.550125   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 18:20:47.552630   14300 node_ready.go:49] node "ubuntu-20-agent-15" has status "Ready":"True"
	I0924 18:20:47.552653   14300 node_ready.go:38] duration metric: took 3.409395ms for node "ubuntu-20-agent-15" to be "Ready" ...
	I0924 18:20:47.552664   14300 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 18:20:47.592321   14300 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qf5sh" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:47.662438   14300 addons.go:475] Verifying addon registry=true in "minikube"
	I0924 18:20:47.671067   14300 out.go:177] * Verifying registry addon...
	I0924 18:20:47.678268   14300 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0924 18:20:47.681853   14300 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0924 18:20:47.682008   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:47.694958   14300 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0924 18:20:48.184546   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:48.208513   14300 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0924 18:20:48.532185   14300 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.25259519s)
	I0924 18:20:48.606285   14300 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.274937867s)
	I0924 18:20:48.619730   14300 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0924 18:20:48.682340   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:48.686896   14300 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.177624319s)
	I0924 18:20:48.756443   14300 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.595695207s)
	I0924 18:20:48.898808   14300 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.630963034s)
	I0924 18:20:48.898852   14300 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0924 18:20:49.105007   14300 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.836399387s)
	W0924 18:20:49.105058   14300 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0924 18:20:49.105084   14300 retry.go:31] will retry after 189.613983ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
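
The failure above is the usual CRD race: a single apply batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass instance together, and because CRD registration in the apiserver is asynchronous, the custom resource has no REST mapping yet when kubectl reaches it, hence `no matches for kind "VolumeSnapshotClass"`. The retry.go line shows the remedy: rerun the whole batch after a short delay, by which time the CRDs are registered (the forced reapply below at 18:20:49.295315 completes cleanly at 18:20:52). A sketch of such a retry loop, assuming plain kubectl rather than minikube's sudo/KUBECONFIG wrapper and a simple doubling backoff in place of retry.go's computed delay:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry reapplies the manifest batch until it succeeds, since the
// first pass can lose the CRD-registration race.
func applyWithRetry(files []string, attempts int) error {
	delay := 200 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return lastErr
}

func main() {
	files := []string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}
	if err := applyWithRetry(files, 3); err != nil {
		fmt.Println(err)
	}
}
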
	I0924 18:20:49.183726   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:49.295315   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 18:20:49.603733   14300 pod_ready.go:103] pod "coredns-7c65d6cfc9-qf5sh" in "kube-system" namespace has status "Ready":"False"
	I0924 18:20:49.685666   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:50.187348   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:50.245380   14300 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.695196958s)
	I0924 18:20:50.245417   14300 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0924 18:20:50.250814   14300 out.go:177] * Verifying csi-hostpath-driver addon...
	I0924 18:20:50.257213   14300 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0924 18:20:50.286486   14300 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0924 18:20:50.286511   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:50.307716   14300 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.118064065s)
	I0924 18:20:50.684113   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:50.785616   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:51.182923   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:51.262641   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:51.684435   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:51.784527   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:52.038939   14300 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.743572179s)
	I0924 18:20:52.099112   14300 pod_ready.go:103] pod "coredns-7c65d6cfc9-qf5sh" in "kube-system" namespace has status "Ready":"False"
	I0924 18:20:52.182475   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:52.261929   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:52.682325   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:52.784037   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:53.182601   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:53.262483   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:53.682920   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:53.783736   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:54.087205   14300 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0924 18:20:54.087327   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2730370245 /var/lib/minikube/google_application_credentials.json
	I0924 18:20:54.102049   14300 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0924 18:20:54.102165   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3572675445 /var/lib/minikube/google_cloud_project
	I0924 18:20:54.114624   14300 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0924 18:20:54.114684   14300 host.go:66] Checking if "minikube" exists ...
	I0924 18:20:54.115403   14300 kubeconfig.go:125] found "minikube" server: "https://10.128.15.240:8443"
	I0924 18:20:54.115426   14300 api_server.go:166] Checking apiserver status ...
	I0924 18:20:54.115464   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:54.136767   14300 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/15621/cgroup
	I0924 18:20:54.150914   14300 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc"
	I0924 18:20:54.150988   14300 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda518cad12c0aadb1302a19f43f86065d/dbdab20d402afb189b336765be57492e710f0de47c2ff4f7aa326354b88d46dc/freezer.state
	I0924 18:20:54.162101   14300 api_server.go:204] freezer state: "THAWED"
	I0924 18:20:54.162134   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:54.167466   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
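
Before the healthz probe, the runner first proves the apiserver process is actually runnable: pgrep finds the pid, the `freezer` entry is pulled out of /proc/<pid>/cgroup, and freezer.state must read THAWED (a frozen cgroup would answer health checks with a timeout rather than an error). A sketch of that lookup, assuming cgroup v1 with the freezer controller mounted at /sys/fs/cgroup/freezer, as on this host:

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerState resolves a process's cgroup-v1 freezer path and returns the
// contents of its freezer.state file ("THAWED", "FREEZING", or "FROZEN").
func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		// Entries look like "4:freezer:/kubepods/burstable/pod<uid>/<id>".
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer cgroup entry for pid %d", pid)
}

func main() {
	state, err := freezerState(15621) // pid taken from the log above
	fmt.Println(state, err)
}
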
	I0924 18:20:54.167542   14300 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0924 18:20:54.170885   14300 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 18:20:54.172249   14300 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0924 18:20:54.173545   14300 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0924 18:20:54.173586   14300 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0924 18:20:54.173763   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1619973819 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0924 18:20:54.182693   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:54.185628   14300 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0924 18:20:54.185677   14300 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0924 18:20:54.185889   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3042757817 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0924 18:20:54.197219   14300 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:20:54.197257   14300 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0924 18:20:54.197413   14300 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2857024435 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:20:54.210393   14300 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 18:20:54.261396   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:54.750525   14300 pod_ready.go:93] pod "coredns-7c65d6cfc9-qf5sh" in "kube-system" namespace has status "Ready":"True"
	I0924 18:20:54.750552   14300 pod_ready.go:82] duration metric: took 7.157178661s for pod "coredns-7c65d6cfc9-qf5sh" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:54.750565   14300 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x5smh" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:54.767175   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:54.767851   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:54.768740   14300 pod_ready.go:93] pod "coredns-7c65d6cfc9-x5smh" in "kube-system" namespace has status "Ready":"True"
	I0924 18:20:54.768767   14300 pod_ready.go:82] duration metric: took 18.192917ms for pod "coredns-7c65d6cfc9-x5smh" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:54.768780   14300 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:54.845250   14300 pod_ready.go:93] pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
	I0924 18:20:54.845277   14300 pod_ready.go:82] duration metric: took 76.486941ms for pod "etcd-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:54.845290   14300 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:55.280599   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:55.281069   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:55.435114   14300 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
	I0924 18:20:55.435142   14300 pod_ready.go:82] duration metric: took 589.844008ms for pod "kube-apiserver-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:55.435155   14300 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:55.503102   14300 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.292658586s)
	I0924 18:20:55.504210   14300 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0924 18:20:55.506658   14300 out.go:177] * Verifying gcp-auth addon...
	I0924 18:20:55.509307   14300 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0924 18:20:55.512549   14300 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 18:20:55.684792   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:55.786420   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:56.181873   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:56.261286   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:56.440755   14300 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
	I0924 18:20:56.440783   14300 pod_ready.go:82] duration metric: took 1.005616524s for pod "kube-controller-manager-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:56.440796   14300 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b5gd8" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:56.445174   14300 pod_ready.go:93] pod "kube-proxy-b5gd8" in "kube-system" namespace has status "Ready":"True"
	I0924 18:20:56.445196   14300 pod_ready.go:82] duration metric: took 4.391833ms for pod "kube-proxy-b5gd8" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:56.445207   14300 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:56.596590   14300 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace has status "Ready":"True"
	I0924 18:20:56.596614   14300 pod_ready.go:82] duration metric: took 151.398494ms for pod "kube-scheduler-ubuntu-20-agent-15" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:56.596631   14300 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-w5fqh" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:56.683029   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:56.761477   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:56.996941   14300 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-w5fqh" in "kube-system" namespace has status "Ready":"True"
	I0924 18:20:56.996962   14300 pod_ready.go:82] duration metric: took 400.323541ms for pod "nvidia-device-plugin-daemonset-w5fqh" in "kube-system" namespace to be "Ready" ...
	I0924 18:20:56.996971   14300 pod_ready.go:39] duration metric: took 9.444296322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
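
Each pod_ready.go wait above is a poll against a single pod's PodReady condition, capped at 6m0s, with the `duration metric:` lines reporting the actual wall time. A client-go sketch of one such wait; this is illustrative, not minikube's own implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's PodReady condition is True or the
// timeout elapses, polling through transient Get errors.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	start := time.Now()
	err = waitPodReady(cs, "kube-system", "coredns-7c65d6cfc9-qf5sh", 6*time.Minute)
	fmt.Printf("duration metric: took %s, err=%v\n", time.Since(start), err)
}
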
	I0924 18:20:56.996986   14300 api_server.go:52] waiting for apiserver process to appear ...
	I0924 18:20:56.997030   14300 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 18:20:57.015891   14300 api_server.go:72] duration metric: took 10.121823595s to wait for apiserver process to appear ...
	I0924 18:20:57.015915   14300 api_server.go:88] waiting for apiserver healthz status ...
	I0924 18:20:57.015933   14300 api_server.go:253] Checking apiserver healthz at https://10.128.15.240:8443/healthz ...
	I0924 18:20:57.019305   14300 api_server.go:279] https://10.128.15.240:8443/healthz returned 200:
	ok
	I0924 18:20:57.020193   14300 api_server.go:141] control plane version: v1.31.1
	I0924 18:20:57.020218   14300 api_server.go:131] duration metric: took 4.296098ms to wait for apiserver health ...
	I0924 18:20:57.020228   14300 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 18:20:57.181943   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:57.201612   14300 system_pods.go:59] 16 kube-system pods found
	I0924 18:20:57.201641   14300 system_pods.go:61] "coredns-7c65d6cfc9-qf5sh" [8e4da0cd-72e7-4f6b-89cb-0760337d2718] Running
	I0924 18:20:57.201648   14300 system_pods.go:61] "csi-hostpath-attacher-0" [cdfcc78e-f456-4a05-8b10-8acd9dc3c578] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 18:20:57.201657   14300 system_pods.go:61] "csi-hostpath-resizer-0" [511f9e00-bc7b-4f36-9e63-7046d4b5af4f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 18:20:57.201665   14300 system_pods.go:61] "csi-hostpathplugin-56tsc" [ae224c9b-0040-4907-8970-d7ae80ccab8c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 18:20:57.201669   14300 system_pods.go:61] "etcd-ubuntu-20-agent-15" [6f1a62dd-dbd6-4c43-b2c2-b8b374c43804] Running
	I0924 18:20:57.201673   14300 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-15" [967bf846-5d94-415d-885d-b11eef04a4c0] Running
	I0924 18:20:57.201677   14300 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-15" [5667cc10-f78f-4b63-bfaf-ae48f23a02ac] Running
	I0924 18:20:57.201680   14300 system_pods.go:61] "kube-proxy-b5gd8" [9f915b8a-a07d-4000-805a-a8724827e48f] Running
	I0924 18:20:57.201685   14300 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-15" [1e4726f0-74de-476e-afef-3027b2bf60e5] Running
	I0924 18:20:57.201690   14300 system_pods.go:61] "metrics-server-84c5f94fbc-44qj8" [2ed2f7f7-673f-4480-9ab4-24f463eda2db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 18:20:57.201697   14300 system_pods.go:61] "nvidia-device-plugin-daemonset-w5fqh" [662002ba-ca76-4e7f-97f4-416f0ec02c9e] Running
	I0924 18:20:57.201702   14300 system_pods.go:61] "registry-66c9cd494c-hngnq" [12f00669-2ddf-46ee-94c2-081f0f063e2f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0924 18:20:57.201709   14300 system_pods.go:61] "registry-proxy-qvdm8" [9f8ff49d-1599-4142-892a-bb601f73001a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0924 18:20:57.201716   14300 system_pods.go:61] "snapshot-controller-56fcc65765-f5v97" [7f3a1574-2372-4834-a424-5c19bfc68120] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:20:57.201722   14300 system_pods.go:61] "snapshot-controller-56fcc65765-lbrrs" [5a62e3ae-bbd9-4fd2-b292-67044c6b51b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:20:57.201728   14300 system_pods.go:61] "storage-provisioner" [6303c0d3-05f6-42eb-ad57-55d8e6c33d00] Running
	I0924 18:20:57.201735   14300 system_pods.go:74] duration metric: took 181.502302ms to wait for pod list to return data ...
	I0924 18:20:57.201744   14300 default_sa.go:34] waiting for default service account to be created ...
	I0924 18:20:57.261033   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:57.396853   14300 default_sa.go:45] found service account: "default"
	I0924 18:20:57.396883   14300 default_sa.go:55] duration metric: took 195.133295ms for default service account to be created ...
	I0924 18:20:57.396892   14300 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 18:20:57.618370   14300 system_pods.go:86] 16 kube-system pods found
	I0924 18:20:57.618395   14300 system_pods.go:89] "coredns-7c65d6cfc9-qf5sh" [8e4da0cd-72e7-4f6b-89cb-0760337d2718] Running
	I0924 18:20:57.618403   14300 system_pods.go:89] "csi-hostpath-attacher-0" [cdfcc78e-f456-4a05-8b10-8acd9dc3c578] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 18:20:57.618409   14300 system_pods.go:89] "csi-hostpath-resizer-0" [511f9e00-bc7b-4f36-9e63-7046d4b5af4f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 18:20:57.618417   14300 system_pods.go:89] "csi-hostpathplugin-56tsc" [ae224c9b-0040-4907-8970-d7ae80ccab8c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 18:20:57.618421   14300 system_pods.go:89] "etcd-ubuntu-20-agent-15" [6f1a62dd-dbd6-4c43-b2c2-b8b374c43804] Running
	I0924 18:20:57.618425   14300 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-15" [967bf846-5d94-415d-885d-b11eef04a4c0] Running
	I0924 18:20:57.618428   14300 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-15" [5667cc10-f78f-4b63-bfaf-ae48f23a02ac] Running
	I0924 18:20:57.618431   14300 system_pods.go:89] "kube-proxy-b5gd8" [9f915b8a-a07d-4000-805a-a8724827e48f] Running
	I0924 18:20:57.618435   14300 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-15" [1e4726f0-74de-476e-afef-3027b2bf60e5] Running
	I0924 18:20:57.618439   14300 system_pods.go:89] "metrics-server-84c5f94fbc-44qj8" [2ed2f7f7-673f-4480-9ab4-24f463eda2db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 18:20:57.618448   14300 system_pods.go:89] "nvidia-device-plugin-daemonset-w5fqh" [662002ba-ca76-4e7f-97f4-416f0ec02c9e] Running
	I0924 18:20:57.618454   14300 system_pods.go:89] "registry-66c9cd494c-hngnq" [12f00669-2ddf-46ee-94c2-081f0f063e2f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0924 18:20:57.618459   14300 system_pods.go:89] "registry-proxy-qvdm8" [9f8ff49d-1599-4142-892a-bb601f73001a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0924 18:20:57.618482   14300 system_pods.go:89] "snapshot-controller-56fcc65765-f5v97" [7f3a1574-2372-4834-a424-5c19bfc68120] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:20:57.618493   14300 system_pods.go:89] "snapshot-controller-56fcc65765-lbrrs" [5a62e3ae-bbd9-4fd2-b292-67044c6b51b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 18:20:57.618497   14300 system_pods.go:89] "storage-provisioner" [6303c0d3-05f6-42eb-ad57-55d8e6c33d00] Running
	I0924 18:20:57.618510   14300 system_pods.go:126] duration metric: took 221.606344ms to wait for k8s-apps to be running ...
	I0924 18:20:57.618519   14300 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 18:20:57.618563   14300 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0924 18:20:57.632223   14300 system_svc.go:56] duration metric: took 13.693873ms WaitForService to wait for kubelet
	I0924 18:20:57.632248   14300 kubeadm.go:582] duration metric: took 10.738185536s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 18:20:57.632265   14300 node_conditions.go:102] verifying NodePressure condition ...
	I0924 18:20:57.682140   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:57.763290   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:57.796660   14300 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0924 18:20:57.796694   14300 node_conditions.go:123] node cpu capacity is 8
	I0924 18:20:57.796708   14300 node_conditions.go:105] duration metric: took 164.438324ms to run NodePressure ...
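
The NodePressure verification reads the node's reported capacity (the ephemeral-storage and CPU figures above come straight from node status) and confirms no pressure condition is set. A client-go sketch of the same check, reusing the setup from the previous sketch and the node name from this log; illustrative only:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// checkNodePressure prints the node's capacity and fails if any
// memory, disk, or PID pressure condition is currently True.
func checkNodePressure(cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("node storage ephemeral capacity is %s\n", node.Status.Capacity.StorageEphemeral())
	fmt.Printf("node cpu capacity is %s\n", node.Status.Capacity.Cpu())
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			if c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", name, c.Type)
			}
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	fmt.Println(checkNodePressure(kubernetes.NewForConfigOrDie(cfg), "ubuntu-20-agent-15"))
}
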
	I0924 18:20:57.796720   14300 start.go:241] waiting for startup goroutines ...
	I0924 18:20:58.182848   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:58.261708   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:58.682291   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:58.761710   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:59.182084   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:59.261613   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:20:59.684122   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:20:59.761979   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:00.181795   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:00.261328   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:00.682131   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:00.761869   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:01.181569   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:01.261480   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:01.681975   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 18:21:01.761131   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:02.182874   14300 kapi.go:107] duration metric: took 14.504603418s to wait for kubernetes.io/minikube-addons=registry ...
	I0924 18:21:02.261752   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:02.762671   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:03.260825   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:03.762137   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:04.261376   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:04.761906   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:05.261065   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:05.761929   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:06.262038   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:06.762204   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:07.262127   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:07.761572   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:08.261273   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:08.761853   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:09.262350   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:09.762320   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:10.261285   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:10.761665   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:11.261845   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:11.761508   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:12.262333   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:12.762278   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:13.261522   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:13.760845   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:14.261583   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:14.760544   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:15.262390   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:15.761792   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:16.350499   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:16.761278   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:17.261160   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:17.761977   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:18.262228   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:18.766411   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:19.262954   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:19.761072   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:20.261450   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:20.761837   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:21.263452   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:21.763506   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:22.264582   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:22.760879   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:23.261469   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 18:21:23.761629   14300 kapi.go:107] duration metric: took 33.504412809s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0924 18:21:37.012888   14300 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 18:21:37.012957   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:37.513294   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:38.012520   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:38.513015   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:39.012922   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:39.512486   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:40.012979   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:40.513232   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:41.012484   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:41.512593   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:42.013003   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:42.512855   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:43.013278   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:43.513729   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:44.012641   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:44.513219   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:45.012125   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:45.513521   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:46.012312   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:46.512166   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:47.012333   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:47.512373   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:48.012748   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:48.512588   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:49.012291   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:49.512551   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:50.012945   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:50.513076   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:51.013310   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:51.512182   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:52.012259   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:52.512322   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:53.012331   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:53.512118   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:54.013098   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:54.512632   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:55.012496   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:55.512722   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:56.013091   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:56.513323   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:57.012073   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:57.511900   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:58.012733   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:58.512915   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:59.013240   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:21:59.513240   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:00.013387   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:00.512511   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:01.012770   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:01.513173   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:02.013119   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:02.512789   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:03.012810   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:03.513402   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:04.012162   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:04.512257   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:05.012027   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:05.513562   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:06.012711   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:06.512782   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:07.013121   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:07.512170   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:08.012384   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:08.512401   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:09.012657   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:09.512534   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:10.012796   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:10.512633   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:11.012594   14300 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 18:22:11.512713   14300 kapi.go:107] duration metric: took 1m16.003406546s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0924 18:22:11.514689   14300 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0924 18:22:11.516188   14300 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0924 18:22:11.517640   14300 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0924 18:22:11.519252   14300 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner-rancher, yakd, inspektor-gadget, storage-provisioner, metrics-server, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0924 18:22:11.520719   14300 addons.go:510] duration metric: took 1m24.633654503s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner-rancher yakd inspektor-gadget storage-provisioner metrics-server volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0924 18:22:11.520759   14300 start.go:246] waiting for cluster config update ...
	I0924 18:22:11.520777   14300 start.go:255] writing updated cluster config ...
	I0924 18:22:11.521046   14300 exec_runner.go:51] Run: rm -f paused
	I0924 18:22:11.567291   14300 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 18:22:11.569201   14300 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
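	
	The `gcp-auth-skip-secret` opt-out mentioned in the lines above is applied as a pod label at creation time. A minimal sketch, assuming the label key `gcp-auth-skip-secret` (value shown as "true" for illustration) is honored by the addon's mutating webhook; the pod name `skip-demo` is hypothetical:
	
	# Create a pod that the gcp-auth webhook should leave unmutated:
	kubectl --context minikube run skip-demo --image=busybox \
	  --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 300
	
	Because the webhook mutates pods at admission, the label must already be present when the pod is created; labeling a running pod has no effect, which is why the message above suggests recreating existing pods or rerunning `addons enable` with --refresh.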
	
	
	==> Docker <==
	-- Logs begin at Sat 2024-08-24 19:10:57 UTC, end at Tue 2024-09-24 18:32:04 UTC. --
	Sep 24 18:24:18 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:24:18.757279056Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=b813938150ade855 traceID=aa98f5c0a5c56587de8188644be151cc
	Sep 24 18:24:18 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:24:18.759916437Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=b813938150ade855 traceID=aa98f5c0a5c56587de8188644be151cc
	Sep 24 18:24:24 ubuntu-20-agent-15 cri-dockerd[14859]: time="2024-09-24T18:24:24Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 24 18:24:26 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:24:26.126328523Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 24 18:24:26 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:24:26.126360258Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 24 18:24:26 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:24:26.128243265Z" level=error msg="Error running exec f50697890252e0e0d6fd156eb0f6cf8d8e7ffc14096c4ed5c536bc83b08e6113 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=36c2e686660572a5 traceID=68dff2ef0f98d9a4823b00498bbc3253
	Sep 24 18:24:26 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:24:26.399842856Z" level=info msg="ignoring event" container=e236cabc31a3959700f02d96313c783a26618acbda3ad100e7b88c9ddff93854 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:25:39 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:25:39.757049075Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=059b1fe13fbacbee traceID=52bb582c9fe4acddde492363f34b32a9
	Sep 24 18:25:39 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:25:39.759586319Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=059b1fe13fbacbee traceID=52bb582c9fe4acddde492363f34b32a9
	Sep 24 18:27:11 ubuntu-20-agent-15 cri-dockerd[14859]: time="2024-09-24T18:27:11Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 24 18:27:13 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:27:13.407032660Z" level=info msg="ignoring event" container=cd8250e1ed846c059a226893798401ba3eab38e514749f376bea5ec1c4c28b7b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:28:29 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:28:29.753309428Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=9a363b29da0d0167 traceID=b154fc16e1f05ba6cc332f1d3f4e8007
	Sep 24 18:28:29 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:28:29.755586901Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=9a363b29da0d0167 traceID=b154fc16e1f05ba6cc332f1d3f4e8007
	Sep 24 18:31:03 ubuntu-20-agent-15 cri-dockerd[14859]: time="2024-09-24T18:31:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a452efd819286b1c3a2cc7c627af39051502d48c46840a93426bcba405816e7/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 24 18:31:04 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:31:04.028284427Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=542ea6c70e19e313 traceID=60cee9bc54f49013a5962a995abcd95e
	Sep 24 18:31:04 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:31:04.030372500Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=542ea6c70e19e313 traceID=60cee9bc54f49013a5962a995abcd95e
	Sep 24 18:31:18 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:31:18.754018918Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=f2b52070319b8dbb traceID=338e8ac7fed44e67bbabc5f43cd57a6f
	Sep 24 18:31:18 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:31:18.756374517Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=f2b52070319b8dbb traceID=338e8ac7fed44e67bbabc5f43cd57a6f
	Sep 24 18:31:47 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:31:47.759499369Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=82bb9319296f0ab9 traceID=1706fd10eae44aa5dc1d783d79dadcd2
	Sep 24 18:31:47 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:31:47.761809291Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=82bb9319296f0ab9 traceID=1706fd10eae44aa5dc1d783d79dadcd2
	Sep 24 18:32:03 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:32:03.590502379Z" level=info msg="ignoring event" container=3a452efd819286b1c3a2cc7c627af39051502d48c46840a93426bcba405816e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:32:03 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:32:03.874069329Z" level=info msg="ignoring event" container=148a9fa65024f43450554b3a879109523a5def9f7c179959fb0869976d280448 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:32:03 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:32:03.939482504Z" level=info msg="ignoring event" container=520a476f2f6cd12d2de13fa6b112ecf7ba22de3617679e731894296670fb9ee5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:32:04 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:32:04.020223572Z" level=info msg="ignoring event" container=e6e2bd36b0ed2306fb73c5d749375ad15e998d81b6f99159dc896985a720a291 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 24 18:32:04 ubuntu-20-agent-15 dockerd[14530]: time="2024-09-24T18:32:04.103092587Z" level=info msg="ignoring event" container=b6884953da123b4edc7fbea500099ee49b4ae3a3e10c13dc0370f2d7da0e4d60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	cd8250e1ed846       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   b0faf81eb39f0       gadget-v55cn
	b81d9960f8640       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   23cdc165cf844       gcp-auth-89d5ffd79-txtjg
	4f313abb54f39       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   7015e55717670       csi-hostpathplugin-56tsc
	4b97ac04a8b7a       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   7015e55717670       csi-hostpathplugin-56tsc
	77f44196bd46d       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   7015e55717670       csi-hostpathplugin-56tsc
	996dac4bc8359       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   7015e55717670       csi-hostpathplugin-56tsc
	c6fd66bcd917d       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   7015e55717670       csi-hostpathplugin-56tsc
	62417de194f07       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   7015e55717670       csi-hostpathplugin-56tsc
	dd09c0fa1e58e       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   301afa0eef0c7       csi-hostpath-attacher-0
	1ddf37617ee1c       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   5c8e243496110       csi-hostpath-resizer-0
	49cd3b3663dac       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   9f8fc8d0b5166       snapshot-controller-56fcc65765-f5v97
	2d94bab951afd       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   5edaf7739be82       snapshot-controller-56fcc65765-lbrrs
	a5507110606c7       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        10 minutes ago      Running             metrics-server                           0                   11fd5c0da3461       metrics-server-84c5f94fbc-44qj8
	8292f15627400       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   caf26e089aa44       yakd-dashboard-67d98fc6b-jd9gj
	b2cf344a6bff3       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       11 minutes ago      Running             local-path-provisioner                   0                   8c694a783f472       local-path-provisioner-86d989889c-pvd64
	871595dca4254       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               11 minutes ago      Running             cloud-spanner-emulator                   0                   5328eaa2bf76f       cloud-spanner-emulator-5b584cc74-fj6vh
	66411eba6a9bb       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   165d71a33e165       nvidia-device-plugin-daemonset-w5fqh
	53e454cf0780c       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   100cb760a0d58       storage-provisioner
	8f12e717316f1       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   8fd02998c3599       coredns-7c65d6cfc9-qf5sh
	bb786c2af329d       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   33c0825b2b32f       kube-proxy-b5gd8
	dbdab20d402af       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   76da13d537d31       kube-apiserver-ubuntu-20-agent-15
	3a238c786fe4f       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   228509f8a3b7c       kube-scheduler-ubuntu-20-agent-15
	514916276db87       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   3648489d2ef64       kube-controller-manager-ubuntu-20-agent-15
	d4bc788e23fac       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   c41a406ef40e7       etcd-ubuntu-20-agent-15
	
	
	==> coredns [8f12e717316f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 10.244.0.24:39676 - 682 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000326932s
	[INFO] 10.244.0.24:55069 - 11558 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000350859s
	[INFO] 10.244.0.24:57553 - 45702 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111394s
	[INFO] 10.244.0.24:47403 - 31304 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119375s
	[INFO] 10.244.0.24:36726 - 23915 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00008555s
	[INFO] 10.244.0.24:48423 - 21843 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00013484s
	[INFO] 10.244.0.24:38094 - 61263 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005270786s
	[INFO] 10.244.0.24:58134 - 11273 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005290685s
	[INFO] 10.244.0.24:53151 - 27347 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003884322s
	[INFO] 10.244.0.24:46295 - 26668 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003983208s
	[INFO] 10.244.0.24:38494 - 27587 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003148202s
	[INFO] 10.244.0.24:36030 - 28852 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004507s
	[INFO] 10.244.0.24:56795 - 53038 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001575991s
	[INFO] 10.244.0.24:34529 - 10505 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001812393s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-15
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-15
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ab8e06d5efb8aef1f7ea9881c3e41593ddc7876e
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T18_20_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-15
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-15"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 18:20:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-15
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 18:31:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 18:27:50 +0000   Tue, 24 Sep 2024 18:20:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 18:27:50 +0000   Tue, 24 Sep 2024 18:20:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 18:27:50 +0000   Tue, 24 Sep 2024 18:20:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 18:27:50 +0000   Tue, 24 Sep 2024 18:20:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.128.15.240
	  Hostname:    ubuntu-20-agent-15
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                b37db8a4-1476-dab1-7f0f-0d5cfb4ed197
	  Boot ID:                    7f2503e2-3306-475a-bb6d-e45e554d8bdf
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-5b584cc74-fj6vh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-v55cn                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-txtjg                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-qf5sh                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-56tsc                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-15                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-15             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-15    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-b5gd8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-15             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-44qj8               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-w5fqh          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-f5v97          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-lbrrs          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-pvd64       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-jd9gj                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node ubuntu-20-agent-15 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node ubuntu-20-agent-15 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node ubuntu-20-agent-15 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node ubuntu-20-agent-15 event: Registered Node ubuntu-20-agent-15 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a 81 b8 2c e0 28 08 06
	[  +0.042168] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 81 e3 5b e1 2c 08 06
	[  +2.536781] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a 23 66 a1 44 0f 08 06
	[  +1.405655] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 88 62 cc 9c db 08 06
	[  +1.929414] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 9c df 26 af 36 08 06
	[  +4.827088] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa 88 76 25 c6 ce 08 06
	[  +0.447962] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4e 17 f1 8f 25 dc 08 06
	[  +0.264909] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 5a 38 9d 90 52 ab 08 06
	[  +1.999083] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 01 cc 66 82 f7 08 06
	[Sep24 18:22] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 06 8b 1a 46 11 ad 08 06
	[  +0.027418] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff f6 3a 53 d6 3d a4 08 06
	[ +10.012296] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 26 4c 90 a6 ad 2a 08 06
	[  +0.000447] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a fc ee b0 39 ce 08 06
	
	
	==> etcd [d4bc788e23fa] <==
	{"level":"info","ts":"2024-09-24T18:20:38.242194Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3112ce273fbe8262","local-member-id":"13f0e7e2a3d8cc98","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T18:20:38.242281Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T18:20:38.242305Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T18:20:38.242551Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T18:20:38.242853Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T18:20:38.243363Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T18:20:38.243591Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.128.15.240:2379"}
	{"level":"info","ts":"2024-09-24T18:20:54.748802Z","caller":"traceutil/trace.go:171","msg":"trace[1240798094] linearizableReadLoop","detail":"{readStateIndex:838; appliedIndex:837; }","duration":"126.337906ms","start":"2024-09-24T18:20:54.622442Z","end":"2024-09-24T18:20:54.748780Z","steps":["trace[1240798094] 'read index received'  (duration: 61.691952ms)","trace[1240798094] 'applied index is now lower than readState.Index'  (duration: 64.645216ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:20:54.748857Z","caller":"traceutil/trace.go:171","msg":"trace[1191308502] transaction","detail":"{read_only:false; response_revision:822; number_of_response:1; }","duration":"126.5371ms","start":"2024-09-24T18:20:54.622298Z","end":"2024-09-24T18:20:54.748836Z","steps":["trace[1191308502] 'process raft request'  (duration: 61.8347ms)","trace[1191308502] 'compare'  (duration: 64.57029ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T18:20:54.749039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.550369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ubuntu-20-agent-15\" ","response":"range_response_count:1 size:4463"}
	{"level":"info","ts":"2024-09-24T18:20:54.749096Z","caller":"traceutil/trace.go:171","msg":"trace[1713588761] range","detail":"{range_begin:/registry/minions/ubuntu-20-agent-15; range_end:; response_count:1; response_revision:822; }","duration":"126.647979ms","start":"2024-09-24T18:20:54.622438Z","end":"2024-09-24T18:20:54.749086Z","steps":["trace[1713588761] 'agreement among raft nodes before linearized reading'  (duration: 126.458554ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:20:54.764671Z","caller":"traceutil/trace.go:171","msg":"trace[153378049] transaction","detail":"{read_only:false; response_revision:824; number_of_response:1; }","duration":"138.752998ms","start":"2024-09-24T18:20:54.625903Z","end":"2024-09-24T18:20:54.764656Z","steps":["trace[153378049] 'process raft request'  (duration: 138.596929ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:20:54.764695Z","caller":"traceutil/trace.go:171","msg":"trace[1064682351] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"142.157382ms","start":"2024-09-24T18:20:54.622521Z","end":"2024-09-24T18:20:54.764678Z","steps":["trace[1064682351] 'process raft request'  (duration: 141.911716ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:20:54.764735Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.016924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/gcp-auth/gcp-auth\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-24T18:20:54.764777Z","caller":"traceutil/trace.go:171","msg":"trace[221397326] range","detail":"{range_begin:/registry/services/specs/gcp-auth/gcp-auth; range_end:; response_count:0; response_revision:824; }","duration":"142.076904ms","start":"2024-09-24T18:20:54.622687Z","end":"2024-09-24T18:20:54.764764Z","steps":["trace[221397326] 'agreement among raft nodes before linearized reading'  (duration: 141.99813ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-24T18:20:54.985885Z","caller":"traceutil/trace.go:171","msg":"trace[172982636] linearizableReadLoop","detail":"{readStateIndex:842; appliedIndex:841; }","duration":"139.458696ms","start":"2024-09-24T18:20:54.846409Z","end":"2024-09-24T18:20:54.985868Z","steps":["trace[172982636] 'read index received'  (duration: 54.322356ms)","trace[172982636] 'applied index is now lower than readState.Index'  (duration: 85.135593ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:20:54.985908Z","caller":"traceutil/trace.go:171","msg":"trace[1400999094] transaction","detail":"{read_only:false; response_revision:826; number_of_response:1; }","duration":"140.942695ms","start":"2024-09-24T18:20:54.844948Z","end":"2024-09-24T18:20:54.985891Z","steps":["trace[1400999094] 'process raft request'  (duration: 55.774838ms)","trace[1400999094] 'compare'  (duration: 85.0497ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-24T18:20:54.986013Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.59137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-ubuntu-20-agent-15\" ","response":"range_response_count:1 size:7681"}
	{"level":"info","ts":"2024-09-24T18:20:54.986049Z","caller":"traceutil/trace.go:171","msg":"trace[1377110393] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-ubuntu-20-agent-15; range_end:; response_count:1; response_revision:826; }","duration":"139.641846ms","start":"2024-09-24T18:20:54.846399Z","end":"2024-09-24T18:20:54.986041Z","steps":["trace[1377110393] 'agreement among raft nodes before linearized reading'  (duration: 139.531787ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-24T18:20:55.277466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.903163ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14742694068942142917 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/minikube-gcp-auth-certs\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/minikube-gcp-auth-certs\" value_size:961 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-24T18:20:55.277596Z","caller":"traceutil/trace.go:171","msg":"trace[80783024] transaction","detail":"{read_only:false; response_revision:830; number_of_response:1; }","duration":"202.195707ms","start":"2024-09-24T18:20:55.075389Z","end":"2024-09-24T18:20:55.277584Z","steps":["trace[80783024] 'process raft request'  (duration: 62.820444ms)","trace[80783024] 'compare'  (duration: 138.806191ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:20:55.431355Z","caller":"traceutil/trace.go:171","msg":"trace[842876556] transaction","detail":"{read_only:false; response_revision:832; number_of_response:1; }","duration":"143.111383ms","start":"2024-09-24T18:20:55.288218Z","end":"2024-09-24T18:20:55.431329Z","steps":["trace[842876556] 'process raft request'  (duration: 91.104029ms)","trace[842876556] 'compare'  (duration: 51.79754ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-24T18:30:38.257716Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1692}
	{"level":"info","ts":"2024-09-24T18:30:38.282890Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1692,"took":"24.668188ms","hash":342770390,"current-db-size-bytes":8155136,"current-db-size":"8.2 MB","current-db-size-in-use-bytes":4263936,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-09-24T18:30:38.282933Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":342770390,"revision":1692,"compact-revision":-1}
	
	
	==> gcp-auth [b81d9960f864] <==
	2024/09/24 18:22:10 GCP Auth Webhook started!
	2024/09/24 18:22:27 Ready to marshal response ...
	2024/09/24 18:22:27 Ready to write response ...
	2024/09/24 18:22:27 Ready to marshal response ...
	2024/09/24 18:22:27 Ready to write response ...
	2024/09/24 18:22:50 Ready to marshal response ...
	2024/09/24 18:22:50 Ready to write response ...
	2024/09/24 18:22:51 Ready to marshal response ...
	2024/09/24 18:22:51 Ready to write response ...
	2024/09/24 18:22:51 Ready to marshal response ...
	2024/09/24 18:22:51 Ready to write response ...
	2024/09/24 18:31:03 Ready to marshal response ...
	2024/09/24 18:31:03 Ready to write response ...
	
	
	==> kernel <==
	 18:32:04 up 14 min,  0 users,  load average: 0.17, 0.26, 0.27
	Linux ubuntu-20-agent-15 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [dbdab20d402a] <==
	W0924 18:21:27.668451       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.131.217:443: connect: connection refused
	W0924 18:21:28.756128       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.99.131.217:443: connect: connection refused
	W0924 18:21:36.517822       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.249.78:443: connect: connection refused
	E0924 18:21:36.517856       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.249.78:443: connect: connection refused" logger="UnhandledError"
	W0924 18:21:58.524008       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.249.78:443: connect: connection refused
	E0924 18:21:58.524042       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.249.78:443: connect: connection refused" logger="UnhandledError"
	W0924 18:21:58.533528       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.249.78:443: connect: connection refused
	E0924 18:21:58.533570       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.249.78:443: connect: connection refused" logger="UnhandledError"
	I0924 18:22:27.838195       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0924 18:22:27.853817       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0924 18:22:40.290286       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0924 18:22:40.314789       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0924 18:22:40.404576       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0924 18:22:40.415137       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0924 18:22:40.450817       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0924 18:22:40.579363       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0924 18:22:40.616351       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0924 18:22:40.655402       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0924 18:22:41.337773       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0924 18:22:41.445908       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0924 18:22:41.451781       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0924 18:22:41.573380       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0924 18:22:41.573390       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0924 18:22:41.655798       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0924 18:22:41.815822       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [514916276db8] <==
	W0924 18:30:47.737966       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:30:47.738007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:30:48.748144       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:30:48.748191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:30:52.491118       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:30:52.491164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:02.776735       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:02.776785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:05.286601       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:05.286644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:12.954841       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:12.954883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:24.516450       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:24.516493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:31.959888       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:31.959930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:39.852929       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:39.852976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:39.969876       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:39.969920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:40.245249       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:40.245289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0924 18:31:48.326042       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0924 18:31:48.326084       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0924 18:32:03.840070       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="10.133µs"
	
	
	==> kube-proxy [bb786c2af329] <==
	I0924 18:20:47.813230       1 server_linux.go:66] "Using iptables proxy"
	I0924 18:20:47.975566       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.128.15.240"]
	E0924 18:20:47.975639       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 18:20:48.145807       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0924 18:20:48.145890       1 server_linux.go:169] "Using iptables Proxier"
	I0924 18:20:48.163101       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 18:20:48.163452       1 server.go:483] "Version info" version="v1.31.1"
	I0924 18:20:48.163488       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 18:20:48.165744       1 config.go:199] "Starting service config controller"
	I0924 18:20:48.165757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 18:20:48.165790       1 config.go:105] "Starting endpoint slice config controller"
	I0924 18:20:48.165796       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 18:20:48.166195       1 config.go:328] "Starting node config controller"
	I0924 18:20:48.166201       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 18:20:48.266178       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 18:20:48.266239       1 shared_informer.go:320] Caches are synced for service config
	I0924 18:20:48.266585       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3a238c786fe4] <==
	W0924 18:20:39.100610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 18:20:39.100735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:39.100761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 18:20:39.100786       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:39.100828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0924 18:20:39.100843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 18:20:39.100852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0924 18:20:39.100865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:40.019966       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 18:20:40.020004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:40.092487       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 18:20:40.092539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:40.107184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 18:20:40.107231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:40.195708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 18:20:40.195754       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:40.196617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 18:20:40.196653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:40.226138       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 18:20:40.226186       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:40.265417       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 18:20:40.265464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 18:20:40.471424       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 18:20:40.471471       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0924 18:20:43.596696       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Sat 2024-08-24 19:10:57 UTC, end at Tue 2024-09-24 18:32:04 UTC. --
	Sep 24 18:31:47 ubuntu-20-agent-15 kubelet[15733]: E0924 18:31:47.763682   15733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="67dacde2-210e-4887-9238-d4560cc8083b"
	Sep 24 18:31:52 ubuntu-20-agent-15 kubelet[15733]: E0924 18:31:52.726214   15733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="f1c9c939-e3c4-4691-9a6c-9d06a60701ae"
	Sep 24 18:31:58 ubuntu-20-agent-15 kubelet[15733]: E0924 18:31:58.726338   15733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="67dacde2-210e-4887-9238-d4560cc8083b"
	Sep 24 18:32:02 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:02.724303   15733 scope.go:117] "RemoveContainer" containerID="cd8250e1ed846c059a226893798401ba3eab38e514749f376bea5ec1c4c28b7b"
	Sep 24 18:32:02 ubuntu-20-agent-15 kubelet[15733]: E0924 18:32:02.724568   15733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-v55cn_gadget(8f41d0d4-f954-4d9d-b0e9-f9947168af65)\"" pod="gadget/gadget-v55cn" podUID="8f41d0d4-f954-4d9d-b0e9-f9947168af65"
	Sep 24 18:32:03 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:03.661687   15733 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb58p\" (UniqueName: \"kubernetes.io/projected/67dacde2-210e-4887-9238-d4560cc8083b-kube-api-access-bb58p\") pod \"67dacde2-210e-4887-9238-d4560cc8083b\" (UID: \"67dacde2-210e-4887-9238-d4560cc8083b\") "
	Sep 24 18:32:03 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:03.661759   15733 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/67dacde2-210e-4887-9238-d4560cc8083b-gcp-creds\") pod \"67dacde2-210e-4887-9238-d4560cc8083b\" (UID: \"67dacde2-210e-4887-9238-d4560cc8083b\") "
	Sep 24 18:32:03 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:03.661889   15733 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67dacde2-210e-4887-9238-d4560cc8083b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "67dacde2-210e-4887-9238-d4560cc8083b" (UID: "67dacde2-210e-4887-9238-d4560cc8083b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 24 18:32:03 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:03.663927   15733 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67dacde2-210e-4887-9238-d4560cc8083b-kube-api-access-bb58p" (OuterVolumeSpecName: "kube-api-access-bb58p") pod "67dacde2-210e-4887-9238-d4560cc8083b" (UID: "67dacde2-210e-4887-9238-d4560cc8083b"). InnerVolumeSpecName "kube-api-access-bb58p". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:32:03 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:03.762486   15733 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/67dacde2-210e-4887-9238-d4560cc8083b-gcp-creds\") on node \"ubuntu-20-agent-15\" DevicePath \"\""
	Sep 24 18:32:03 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:03.762524   15733 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bb58p\" (UniqueName: \"kubernetes.io/projected/67dacde2-210e-4887-9238-d4560cc8083b-kube-api-access-bb58p\") on node \"ubuntu-20-agent-15\" DevicePath \"\""
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:04.165772   15733 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8nq4\" (UniqueName: \"kubernetes.io/projected/12f00669-2ddf-46ee-94c2-081f0f063e2f-kube-api-access-q8nq4\") pod \"12f00669-2ddf-46ee-94c2-081f0f063e2f\" (UID: \"12f00669-2ddf-46ee-94c2-081f0f063e2f\") "
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:04.167886   15733 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f00669-2ddf-46ee-94c2-081f0f063e2f-kube-api-access-q8nq4" (OuterVolumeSpecName: "kube-api-access-q8nq4") pod "12f00669-2ddf-46ee-94c2-081f0f063e2f" (UID: "12f00669-2ddf-46ee-94c2-081f0f063e2f"). InnerVolumeSpecName "kube-api-access-q8nq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:04.258436   15733 scope.go:117] "RemoveContainer" containerID="148a9fa65024f43450554b3a879109523a5def9f7c179959fb0869976d280448"
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:04.266064   15733 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldwxw\" (UniqueName: \"kubernetes.io/projected/9f8ff49d-1599-4142-892a-bb601f73001a-kube-api-access-ldwxw\") pod \"9f8ff49d-1599-4142-892a-bb601f73001a\" (UID: \"9f8ff49d-1599-4142-892a-bb601f73001a\") "
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:04.266158   15733 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-q8nq4\" (UniqueName: \"kubernetes.io/projected/12f00669-2ddf-46ee-94c2-081f0f063e2f-kube-api-access-q8nq4\") on node \"ubuntu-20-agent-15\" DevicePath \"\""
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:04.268552   15733 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f8ff49d-1599-4142-892a-bb601f73001a-kube-api-access-ldwxw" (OuterVolumeSpecName: "kube-api-access-ldwxw") pod "9f8ff49d-1599-4142-892a-bb601f73001a" (UID: "9f8ff49d-1599-4142-892a-bb601f73001a"). InnerVolumeSpecName "kube-api-access-ldwxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:04.283572   15733 scope.go:117] "RemoveContainer" containerID="148a9fa65024f43450554b3a879109523a5def9f7c179959fb0869976d280448"
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: E0924 18:32:04.284561   15733 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 148a9fa65024f43450554b3a879109523a5def9f7c179959fb0869976d280448" containerID="148a9fa65024f43450554b3a879109523a5def9f7c179959fb0869976d280448"
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:04.284623   15733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"148a9fa65024f43450554b3a879109523a5def9f7c179959fb0869976d280448"} err="failed to get container status \"148a9fa65024f43450554b3a879109523a5def9f7c179959fb0869976d280448\": rpc error: code = Unknown desc = Error response from daemon: No such container: 148a9fa65024f43450554b3a879109523a5def9f7c179959fb0869976d280448"
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:04.284654   15733 scope.go:117] "RemoveContainer" containerID="520a476f2f6cd12d2de13fa6b112ecf7ba22de3617679e731894296670fb9ee5"
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:04.311703   15733 scope.go:117] "RemoveContainer" containerID="520a476f2f6cd12d2de13fa6b112ecf7ba22de3617679e731894296670fb9ee5"
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: E0924 18:32:04.312608   15733 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 520a476f2f6cd12d2de13fa6b112ecf7ba22de3617679e731894296670fb9ee5" containerID="520a476f2f6cd12d2de13fa6b112ecf7ba22de3617679e731894296670fb9ee5"
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:04.312650   15733 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"520a476f2f6cd12d2de13fa6b112ecf7ba22de3617679e731894296670fb9ee5"} err="failed to get container status \"520a476f2f6cd12d2de13fa6b112ecf7ba22de3617679e731894296670fb9ee5\": rpc error: code = Unknown desc = Error response from daemon: No such container: 520a476f2f6cd12d2de13fa6b112ecf7ba22de3617679e731894296670fb9ee5"
	Sep 24 18:32:04 ubuntu-20-agent-15 kubelet[15733]: I0924 18:32:04.367377   15733 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ldwxw\" (UniqueName: \"kubernetes.io/projected/9f8ff49d-1599-4142-892a-bb601f73001a-kube-api-access-ldwxw\") on node \"ubuntu-20-agent-15\" DevicePath \"\""
	
	
	==> storage-provisioner [53e454cf0780] <==
	I0924 18:20:49.520563       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 18:20:49.537290       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 18:20:49.537369       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 18:20:49.547537       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 18:20:49.547730       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-15_b4dfba00-f866-463b-bdb1-45f896a06d8a!
	I0924 18:20:49.548010       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e1dc2463-3c42-4085-a809-c5789f23668d", APIVersion:"v1", ResourceVersion:"595", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-15_b4dfba00-f866-463b-bdb1-45f896a06d8a became leader
	I0924 18:20:49.647940       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-15_b4dfba00-f866-463b-bdb1-45f896a06d8a!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-15/10.128.15.240
	Start Time:       Tue, 24 Sep 2024 18:22:51 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qk6pj (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qk6pj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m14s                   default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-15
	  Normal   Pulling    7m47s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m47s (x4 over 9m14s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m47s (x4 over 9m14s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m14s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.98s)
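The kubelet log and the busybox events above show every pull from gcr.io failing with "unauthorized: authentication failed", so the registry-test probe never got to run its wget at all; nothing in this post-mortem implicates the registry service itself. A minimal sketch for separating the two failure modes when reproducing by hand (hypothetical commands, not part of the test harness; assumes docker and kubectl access on the same host as the none-driver cluster):

    # If this pull also fails with "unauthorized", the fault is gcr.io access
    # from the CI host, not the registry addon under test.
    docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc

    # Probe the in-cluster registry with a Docker Hub busybox instead, so the
    # registry service is exercised without any gcr.io pull in the way.
    kubectl --context minikube run registry-probe --rm --restart=Never \
      --image=busybox:1.28 -it -- \
      wget --spider -S http://registry.kube-system.svc.cluster.local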

Test pass (104/166)

| Order | Passed test | Duration (s) |
|-------|-------------|--------------|
| 3 | TestDownloadOnly/v1.20.0/json-events | 3.55 |
| 6 | TestDownloadOnly/v1.20.0/binaries | 0 |
| 8 | TestDownloadOnly/v1.20.0/LogsDuration | 0.06 |
| 9 | TestDownloadOnly/v1.20.0/DeleteAll | 0.12 |
| 10 | TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds | 0.12 |
| 12 | TestDownloadOnly/v1.31.1/json-events | 1.12 |
| 15 | TestDownloadOnly/v1.31.1/binaries | 0 |
| 17 | TestDownloadOnly/v1.31.1/LogsDuration | 0.06 |
| 18 | TestDownloadOnly/v1.31.1/DeleteAll | 0.11 |
| 19 | TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds | 0.12 |
| 21 | TestBinaryMirror | 0.54 |
| 22 | TestOffline | 42.73 |
| 25 | TestAddons/PreSetup/EnablingAddonOnNonExistingCluster | 0.05 |
| 26 | TestAddons/PreSetup/DisablingAddonOnNonExistingCluster | 0.05 |
| 27 | TestAddons/Setup | 102.54 |
| 29 | TestAddons/serial/Volcano | 39.17 |
| 31 | TestAddons/serial/GCPAuth/Namespaces | 0.12 |
| 35 | TestAddons/parallel/InspektorGadget | 10.46 |
| 36 | TestAddons/parallel/MetricsServer | 5.39 |
| 38 | TestAddons/parallel/CSI | 54.92 |
| 39 | TestAddons/parallel/Headlamp | 15.91 |
| 40 | TestAddons/parallel/CloudSpanner | 5.26 |
| 42 | TestAddons/parallel/NvidiaDevicePlugin | 6.23 |
| 43 | TestAddons/parallel/Yakd | 10.41 |
| 44 | TestAddons/StoppedEnableDisable | 10.72 |
| 46 | TestCertExpiration | 230.02 |
| 57 | TestFunctional/serial/CopySyncFile | 0 |
| 58 | TestFunctional/serial/StartWithProxy | 31.22 |
| 59 | TestFunctional/serial/AuditLog | 0 |
| 60 | TestFunctional/serial/SoftStart | 32.58 |
| 61 | TestFunctional/serial/KubeContext | 0.04 |
| 62 | TestFunctional/serial/KubectlGetPods | 0.07 |
| 64 | TestFunctional/serial/MinikubeKubectlCmd | 0.1 |
| 65 | TestFunctional/serial/MinikubeKubectlCmdDirectly | 0.1 |
| 66 | TestFunctional/serial/ExtraConfig | 37.83 |
| 67 | TestFunctional/serial/ComponentHealth | 0.07 |
| 68 | TestFunctional/serial/LogsCmd | 0.86 |
| 69 | TestFunctional/serial/LogsFileCmd | 0.88 |
| 70 | TestFunctional/serial/InvalidService | 4.45 |
| 72 | TestFunctional/parallel/ConfigCmd | 0.27 |
| 73 | TestFunctional/parallel/DashboardCmd | 9.72 |
| 74 | TestFunctional/parallel/DryRun | 0.16 |
| 75 | TestFunctional/parallel/InternationalLanguage | 0.08 |
| 76 | TestFunctional/parallel/StatusCmd | 0.42 |
| 79 | TestFunctional/parallel/ProfileCmd/profile_not_create | 0.21 |
| 80 | TestFunctional/parallel/ProfileCmd/profile_list | 0.19 |
| 81 | TestFunctional/parallel/ProfileCmd/profile_json_output | 0.2 |
| 83 | TestFunctional/parallel/ServiceCmd/DeployApp | 8.14 |
| 84 | TestFunctional/parallel/ServiceCmd/List | 0.33 |
| 85 | TestFunctional/parallel/ServiceCmd/JSONOutput | 0.33 |
| 86 | TestFunctional/parallel/ServiceCmd/HTTPS | 0.16 |
| 87 | TestFunctional/parallel/ServiceCmd/Format | 0.15 |
| 88 | TestFunctional/parallel/ServiceCmd/URL | 0.16 |
| 89 | TestFunctional/parallel/ServiceCmdConnect | 8.31 |
| 90 | TestFunctional/parallel/AddonsCmd | 0.11 |
| 91 | TestFunctional/parallel/PersistentVolumeClaim | 22.7 |
| 104 | TestFunctional/parallel/MySQL | 21.44 |
| 108 | TestFunctional/parallel/UpdateContextCmd/no_changes | 0.12 |
| 109 | TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster | 14.39 |
| 110 | TestFunctional/parallel/UpdateContextCmd/no_clusters | 14.64 |
| 113 | TestFunctional/parallel/NodeLabels | 0.06 |
| 117 | TestFunctional/parallel/Version/short | 0.04 |
| 118 | TestFunctional/parallel/Version/components | 0.39 |
| 119 | TestFunctional/parallel/License | 0.19 |
| 120 | TestFunctional/delete_echo-server_images | 0.03 |
| 121 | TestFunctional/delete_my-image_image | 0.02 |
| 122 | TestFunctional/delete_minikube_cached_images | 0.02 |
| 127 | TestImageBuild/serial/Setup | 13.7 |
| 128 | TestImageBuild/serial/NormalBuild | 0.91 |
| 129 | TestImageBuild/serial/BuildWithBuildArg | 0.65 |
| 130 | TestImageBuild/serial/BuildWithDockerIgnore | 0.38 |
| 131 | TestImageBuild/serial/BuildWithSpecifiedDockerfile | 0.38 |
| 135 | TestJSONOutput/start/Command | 32.14 |
| 136 | TestJSONOutput/start/Audit | 0 |
| 138 | TestJSONOutput/start/parallel/DistinctCurrentSteps | 0 |
| 139 | TestJSONOutput/start/parallel/IncreasingCurrentSteps | 0 |
| 141 | TestJSONOutput/pause/Command | 0.51 |
| 142 | TestJSONOutput/pause/Audit | 0 |
| 144 | TestJSONOutput/pause/parallel/DistinctCurrentSteps | 0 |
| 145 | TestJSONOutput/pause/parallel/IncreasingCurrentSteps | 0 |
| 147 | TestJSONOutput/unpause/Command | 0.41 |
| 148 | TestJSONOutput/unpause/Audit | 0 |
| 150 | TestJSONOutput/unpause/parallel/DistinctCurrentSteps | 0 |
| 151 | TestJSONOutput/unpause/parallel/IncreasingCurrentSteps | 0 |
| 153 | TestJSONOutput/stop/Command | 5.28 |
| 154 | TestJSONOutput/stop/Audit | 0 |
| 156 | TestJSONOutput/stop/parallel/DistinctCurrentSteps | 0 |
| 157 | TestJSONOutput/stop/parallel/IncreasingCurrentSteps | 0 |
| 158 | TestErrorJSONOutput | 0.2 |
| 163 | TestMainNoArgs | 0.05 |
| 164 | TestMinikubeProfile | 33.41 |
| 172 | TestPause/serial/Start | 27.21 |
| 173 | TestPause/serial/SecondStartNoReconfiguration | 29.59 |
| 174 | TestPause/serial/Pause | 0.51 |
| 175 | TestPause/serial/VerifyStatus | 0.13 |
| 176 | TestPause/serial/Unpause | 0.43 |
| 177 | TestPause/serial/PauseAgain | 0.55 |
| 178 | TestPause/serial/DeletePaused | 1.83 |
| 179 | TestPause/serial/VerifyDeletedResources | 0.06 |
| 193 | TestRunningBinaryUpgrade | 67.8 |
| 195 | TestStoppedBinaryUpgrade/Setup | 0.41 |
| 196 | TestStoppedBinaryUpgrade/Upgrade | 48.76 |
| 197 | TestStoppedBinaryUpgrade/MinikubeLogs | 0.81 |
| 198 | TestKubernetesUpgrade | 305.52 |

TestDownloadOnly/v1.20.0/json-events (3.55s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (3.544873276s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (3.55s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (59.079438ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:19:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:19:40.209704   10366 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:19:40.210005   10366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:19:40.210022   10366 out.go:358] Setting ErrFile to fd 2...
	I0924 18:19:40.210026   10366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:19:40.210300   10366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3578/.minikube/bin
	W0924 18:19:40.210470   10366 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19700-3578/.minikube/config/config.json: open /home/jenkins/minikube-integration/19700-3578/.minikube/config/config.json: no such file or directory
	I0924 18:19:40.211158   10366 out.go:352] Setting JSON to true
	I0924 18:19:40.212073   10366 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":131,"bootTime":1727201849,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:19:40.212181   10366 start.go:139] virtualization: kvm guest
	I0924 18:19:40.214839   10366 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0924 18:19:40.214974   10366 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19700-3578/.minikube/cache/preloaded-tarball: no such file or directory
	I0924 18:19:40.215005   10366 notify.go:220] Checking for updates...
	I0924 18:19:40.216699   10366 out.go:169] MINIKUBE_LOCATION=19700
	I0924 18:19:40.218386   10366 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:19:40.219959   10366 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19700-3578/kubeconfig
	I0924 18:19:40.221513   10366 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3578/.minikube
	I0924 18:19:40.223207   10366 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (1.12s)
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.123639803s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (1.12s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (55.481562ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC | 24 Sep 24 18:19 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC | 24 Sep 24 18:19 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 24 Sep 24 18:19 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 18:19:44
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 18:19:44.051907   10520 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:19:44.052008   10520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:19:44.052022   10520 out.go:358] Setting ErrFile to fd 2...
	I0924 18:19:44.052032   10520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:19:44.052236   10520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3578/.minikube/bin
	I0924 18:19:44.052802   10520 out.go:352] Setting JSON to true
	I0924 18:19:44.053635   10520 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":135,"bootTime":1727201849,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:19:44.053722   10520 start.go:139] virtualization: kvm guest
	I0924 18:19:44.056040   10520 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0924 18:19:44.056139   10520 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19700-3578/.minikube/cache/preloaded-tarball: no such file or directory
	I0924 18:19:44.056163   10520 notify.go:220] Checking for updates...
	I0924 18:19:44.057471   10520 out.go:169] MINIKUBE_LOCATION=19700
	I0924 18:19:44.058869   10520 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:19:44.060625   10520 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19700-3578/kubeconfig
	I0924 18:19:44.062130   10520 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3578/.minikube
	I0924 18:19:44.063632   10520 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.54s)
=== RUN   TestBinaryMirror
I0924 18:19:45.670673   10354 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:44331 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.54s)

TestOffline (42.73s)
=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (41.063487065s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.664173648s)
--- PASS: TestOffline (42.73s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (46.430173ms)
-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (46.365129ms)
-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (102.54s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (1m42.541862618s)
--- PASS: TestAddons/Setup (102.54s)

TestAddons/serial/Volcano (39.17s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 10.356512ms
addons_test.go:835: volcano-scheduler stabilized in 10.39238ms
addons_test.go:851: volcano-controller stabilized in 10.412066ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-mt9kd" [43de7c2f-5670-4ab5-876e-2dec6ddb031f] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003851833s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-77552" [35e3f2a4-b659-454f-bea5-34f2a2f59624] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003945222s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-zb6sx" [fd7fd6f8-4675-4987-b486-e450d029e5a6] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002659499s
addons_test.go:870: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [2694bc32-6c8e-410a-93a1-e1f25a545255] Pending
helpers_test.go:344: "test-job-nginx-0" [2694bc32-6c8e-410a-93a1-e1f25a545255] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [2694bc32-6c8e-410a-93a1-e1f25a545255] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004275459s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.837258557s)
--- PASS: TestAddons/serial/Volcano (39.17s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.46s)
=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-v55cn" [8f41d0d4-f954-4d9d-b0e9-f9947168af65] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003256482s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.454476299s)
--- PASS: TestAddons/parallel/InspektorGadget (10.46s)

TestAddons/parallel/MetricsServer (5.39s)
=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.080859ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-44qj8" [2ed2f7f7-673f-4480-9ab4-24f463eda2db] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003961059s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.39s)

TestAddons/parallel/CSI (54.92s)
=== RUN   TestAddons/parallel/CSI
I0924 18:32:21.155022   10354 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0924 18:32:21.159239   10354 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0924 18:32:21.159264   10354 kapi.go:107] duration metric: took 4.252048ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.261143ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2de37fcd-a0e7-4ba3-a3e5-71ef965424ca] Pending
helpers_test.go:344: "task-pv-pod" [2de37fcd-a0e7-4ba3-a3e5-71ef965424ca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2de37fcd-a0e7-4ba3-a3e5-71ef965424ca] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003586532s
addons_test.go:528: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context minikube delete pod task-pv-pod: (1.315520245s)
addons_test.go:544: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [afef104b-20bb-4ca6-996d-2e0b178fe9ed] Pending
helpers_test.go:344: "task-pv-pod-restore" [afef104b-20bb-4ca6-996d-2e0b178fe9ed] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [afef104b-20bb-4ca6-996d-2e0b178fe9ed] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003371033s
addons_test.go:570: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.296109192s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.92s)
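The long runs of identical jsonpath polls above are the harness waiting for the claim to reach Bound and the snapshot to become ready. Outside the harness the same waits can be written more compactly (hypothetical one-liners, not how the test is implemented; assumes a kubectl new enough for jsonpath wait conditions, roughly v1.23+):

    # Block until the claim binds, mirroring the {.status.phase} polling loop.
    kubectl --context minikube wait pvc/hpvc \
      --for=jsonpath='{.status.phase}'=Bound --timeout=6m

    # Block until the snapshot is usable, mirroring the {.status.readyToUse} polls.
    kubectl --context minikube wait volumesnapshot/new-snapshot-demo \
      --for=jsonpath='{.status.readyToUse}'=true --timeout=6m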
TestAddons/parallel/Headlamp (15.91s)
=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-hpvtg" [94c607c3-cdb6-48e1-bd66-9752216033ed] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-hpvtg" [94c607c3-cdb6-48e1-bd66-9752216033ed] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00380055s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.39237193s)
--- PASS: TestAddons/parallel/Headlamp (15.91s)

TestAddons/parallel/CloudSpanner (5.26s)
=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-fj6vh" [bc4cc42f-6215-40bd-89a2-6115845035c9] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003511229s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.26s)

TestAddons/parallel/NvidiaDevicePlugin (6.23s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-w5fqh" [662002ba-ca76-4e7f-97f4-416f0ec02c9e] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003812775s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.23s)

TestAddons/parallel/Yakd (10.41s)
=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-jd9gj" [8857cbc1-8096-40ed-b7c1-62e7978d12b5] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004158004s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.401977633s)
--- PASS: TestAddons/parallel/Yakd (10.41s)

TestAddons/StoppedEnableDisable (10.72s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.393897953s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.72s)

TestCertExpiration (230.02s)
=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (15.739051814s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.531743947s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.745155411s)
--- PASS: TestCertExpiration (230.02s)
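The cluster above is started twice: first with --cert-expiration=3m, then with --cert-expiration=8760h, and the test passes once the short-lived certificates have been regenerated. A minimal Go sketch for inspecting the resulting expiry by hand; the certificate path below is an assumption for the none driver, not something this log confirms:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Assumed location of the apiserver certificate under the none driver.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in certificate file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// After the 8760h restart this should be roughly a year out.
	fmt.Printf("apiserver cert expires %s (in %s)\n",
		cert.NotAfter, time.Until(cert.NotAfter).Round(time.Second))
}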

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19700-3578/.minikube/files/etc/test/nested/copy/10354/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (31.22s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (31.220177182s)
--- PASS: TestFunctional/serial/StartWithProxy (31.22s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (32.58s)

=== RUN   TestFunctional/serial/SoftStart
I0924 18:38:27.034208   10354 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (32.582544632s)
functional_test.go:663: soft start took 32.583508921s for "minikube" cluster.
I0924 18:38:59.617107   10354 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (32.58s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.83s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.829881975s)
functional_test.go:761: restart took 37.829992556s for "minikube" cluster.
I0924 18:39:37.765444   10354 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (37.83s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
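ComponentHealth asserts that each control-plane pod reports phase Running and status Ready, as logged above. A standalone sketch of the same check (not the test's own code; the struct mirrors only the core/v1 Pod JSON fields it needs):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList picks out just the fields the check needs from `kubectl -o json`.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase string `json:"phase"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "minikube", "get", "po",
		"-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
	}
}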

                                                
                                    
TestFunctional/serial/LogsCmd (0.86s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.86s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.88s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd494367663/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.88s)

                                                
                                    
TestFunctional/serial/InvalidService (4.45s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (161.756527ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://10.128.15.240:30523 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.103172241s)
--- PASS: TestFunctional/serial/InvalidService (4.45s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.27s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (43.630873ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (42.303229ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.27s)
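The sequence above relies on `config get` exiting with status 14 when a key is unset. A minimal sketch of asserting that contract from Go, assuming the same binary path used throughout this report:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "minikube", "config", "get", "cpus")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("cpus is unset, as expected") // matches "specified key could not be found"
		return
	}
	fmt.Println("cpus is set, or the command failed differently:", err)
}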

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.72s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/24 18:39:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 46471: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.72s)

                                                
                                    
TestFunctional/parallel/DryRun (0.16s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (80.905212ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-3578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile

-- /stdout --
** stderr ** 
	I0924 18:39:54.053995   46864 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:39:54.054103   46864 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:39:54.054114   46864 out.go:358] Setting ErrFile to fd 2...
	I0924 18:39:54.054120   46864 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:39:54.054340   46864 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3578/.minikube/bin
	I0924 18:39:54.054945   46864 out.go:352] Setting JSON to false
	I0924 18:39:54.055905   46864 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1345,"bootTime":1727201849,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:39:54.055999   46864 start.go:139] virtualization: kvm guest
	I0924 18:39:54.058496   46864 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0924 18:39:54.060072   46864 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19700-3578/.minikube/cache/preloaded-tarball: no such file or directory
	I0924 18:39:54.060112   46864 notify.go:220] Checking for updates...
	I0924 18:39:54.060133   46864 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:39:54.061596   46864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:39:54.063084   46864 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3578/kubeconfig
	I0924 18:39:54.064704   46864 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3578/.minikube
	I0924 18:39:54.066205   46864 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:39:54.067730   46864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:39:54.069481   46864 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 18:39:54.069807   46864 exec_runner.go:51] Run: systemctl --version
	I0924 18:39:54.072504   46864 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:39:54.085517   46864 out.go:177] * Using the none driver based on existing profile
	I0924 18:39:54.086902   46864 start.go:297] selected driver: none
	I0924 18:39:54.086919   46864 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:39:54.087033   46864 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:39:54.087061   46864 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0924 18:39:54.087336   46864 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0924 18:39:54.089520   46864 out.go:201] 
	W0924 18:39:54.090876   46864 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0924 18:39:54.092135   46864 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.16s)
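Both dry runs above are rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MiB falls below the usable minimum. A sketch of the comparison itself; the 1800MB threshold is taken from the error message in this log, not from minikube's source:

package main

import "fmt"

// minUsableMB comes from the RSRC_INSUFFICIENT_REQ_MEMORY message above.
const minUsableMB = 1800

func validateMemoryMB(requested int) error {
	if requested < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requested, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemoryMB(250))  // rejected, as in the dry run above
	fmt.Println(validateMemoryMB(4000)) // accepted
}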

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (81.677738ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-3578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant

-- /stdout --
** stderr ** 
	I0924 18:39:54.217689   46892 out.go:345] Setting OutFile to fd 1 ...
	I0924 18:39:54.217783   46892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:39:54.217787   46892 out.go:358] Setting ErrFile to fd 2...
	I0924 18:39:54.217792   46892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 18:39:54.218051   46892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19700-3578/.minikube/bin
	I0924 18:39:54.218576   46892 out.go:352] Setting JSON to false
	I0924 18:39:54.219501   46892 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1345,"bootTime":1727201849,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0924 18:39:54.219603   46892 start.go:139] virtualization: kvm guest
	I0924 18:39:54.221818   46892 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0924 18:39:54.223469   46892 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19700-3578/.minikube/cache/preloaded-tarball: no such file or directory
	I0924 18:39:54.223515   46892 notify.go:220] Checking for updates...
	I0924 18:39:54.223573   46892 out.go:177]   - MINIKUBE_LOCATION=19700
	I0924 18:39:54.225147   46892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 18:39:54.226865   46892 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19700-3578/kubeconfig
	I0924 18:39:54.228467   46892 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3578/.minikube
	I0924 18:39:54.229973   46892 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0924 18:39:54.231471   46892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 18:39:54.233307   46892 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 18:39:54.233612   46892 exec_runner.go:51] Run: systemctl --version
	I0924 18:39:54.236361   46892 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 18:39:54.247815   46892 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0924 18:39:54.249324   46892 start.go:297] selected driver: none
	I0924 18:39:54.249342   46892 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.240 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 18:39:54.249473   46892 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 18:39:54.249497   46892 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0924 18:39:54.249799   46892 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0924 18:39:54.252112   46892 out.go:201] 
	W0924 18:39:54.253544   46892 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0924 18:39:54.254929   46892 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.42s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "150.444665ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.387561ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.19s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "151.659061ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "43.690196ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.20s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-d8mbq" [959fdd24-1732-4f2b-a668-a7af35991996] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-d8mbq" [959fdd24-1732-4f2b-a668-a7af35991996] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003837557s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "327.146469ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.128.15.240:31545
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.15s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.128.15.240:31545
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.31s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-2w9cl" [ac815824-e056-4ee4-bc5d-f61888d625d9] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-2w9cl" [ac815824-e056-4ee4-bc5d-f61888d625d9] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003215976s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.128.15.240:32229
functional_test.go:1675: http://10.128.15.240:32229: success! body:

Hostname: hello-node-connect-67bdd5bbb4-2w9cl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.128.15.240:8080/

Request Headers:
	accept-encoding=gzip
	host=10.128.15.240:32229
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.31s)
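The connectivity check boils down to fetching the NodePort URL that `minikube service --url` printed and confirming the echoserver names its pod. A minimal sketch; the URL is the one from this particular run and will differ elsewhere:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://10.128.15.240:32229/") // URL from this run
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("service reachable; echoserver identified its pod")
	} else {
		fmt.Println("unexpected body:", string(body))
	}
}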

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (22.7s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f4a2da65-ea00-46cf-80f4-45d5b88d4ab1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003904246s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fd0bdcf7-427e-4971-933a-4a81157467e7] Pending
helpers_test.go:344: "sp-pod" [fd0bdcf7-427e-4971-933a-4a81157467e7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fd0bdcf7-427e-4971-933a-4a81157467e7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003499198s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.007758125s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [36ce5eef-9169-44ad-bef8-ecbdb490468f] Pending
helpers_test.go:344: "sp-pod" [36ce5eef-9169-44ad-bef8-ecbdb490468f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [36ce5eef-9169-44ad-bef8-ecbdb490468f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002887746s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.70s)
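The persistence check above writes a file through the first pod, deletes the pod, schedules a replacement against the same claim, and expects the file to survive. A compressed sketch of that flow via kubectl; the pod name and mount path come from this run's testdata, and the real test waits for pod readiness between the apply and the final exec:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run shells out to kubectl against the minikube context and fails fast.
func run(args ...string) string {
	out, err := exec.Command("kubectl", append([]string{"--context", "minikube"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("delete", "pod", "sp-pod", "--wait=true")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits here for "test=storage-provisioner" to be Running.
	fmt.Print(run("exec", "sp-pod", "--", "ls", "/tmp/mount")) // expect "foo"
}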

                                                
                                    
TestFunctional/parallel/MySQL (21.44s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-294s8" [0f35b340-b31f-4f34-9011-6a3332835b20] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-294s8" [0f35b340-b31f-4f34-9011-6a3332835b20] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.004425351s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-294s8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-294s8 -- mysql -ppassword -e "show databases;": exit status 1 (118.923154ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0924 18:40:53.021724   10354 retry.go:31] will retry after 511.069335ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-294s8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-294s8 -- mysql -ppassword -e "show databases;": exit status 1 (114.73475ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0924 18:40:53.648295   10354 retry.go:31] will retry after 1.053692352s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-294s8 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-294s8 -- mysql -ppassword -e "show databases;": exit status 1 (110.194073ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0924 18:40:54.812549   10354 retry.go:31] will retry after 2.25646572s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-294s8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.44s)
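mysqld keeps refusing connections for a while after the pod reports Running, so the harness retries with growing intervals (511ms, 1.05s, 2.26s above) until the query succeeds. A generic sketch of that pattern; the probe below is a stand-in, not the kubectl exec the test actually retries:

package main

import (
	"fmt"
	"time"
)

// retry re-runs probe with roughly doubling pauses until it succeeds
// or the overall deadline elapses.
func retry(deadline time.Duration, probe func() error) error {
	backoff := 500 * time.Millisecond
	for start := time.Now(); time.Since(start) < deadline; backoff *= 2 {
		err := probe()
		if err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", backoff, err)
		time.Sleep(backoff)
	}
	return fmt.Errorf("gave up after %s", deadline)
}

func main() {
	attempts := 0
	err := retry(10*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("mysqld not ready yet") // stand-in failure
		}
		return nil
	})
	fmt.Println("attempts:", attempts, "err:", err)
}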

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.39s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.394453416s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.39s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (14.64s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.644515355s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (14.64s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.39s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.39s)

                                                
                                    
TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestImageBuild/serial/Setup (13.7s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.701534053s)
--- PASS: TestImageBuild/serial/Setup (13.70s)

                                                
                                    
TestImageBuild/serial/NormalBuild (0.91s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
--- PASS: TestImageBuild/serial/NormalBuild (0.91s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.65s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.65s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.38s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.38s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.38s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.38s)

                                                
                                    
TestJSONOutput/start/Command (32.14s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (32.13477192s)
--- PASS: TestJSONOutput/start/Command (32.14s)
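With --output=json every stdout line is a self-describing CloudEvents-style object (the TestErrorJSONOutput block below shows the exact shape, e.g. type io.k8s.sigs.minikube.step with a string-valued data payload). A minimal sketch that decodes such a stream from stdin, suitable for piping `minikube start --output=json` into:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event captures only the fields of interest; data values are strings in
// the samples shown in this report.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON line
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}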

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.51s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.51s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.41s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.41s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.28s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (5.279019127s)
--- PASS: TestJSONOutput/stop/Command (5.28s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.527695ms)
-- stdout --
	{"specversion":"1.0","id":"4fc9fe91-9bdb-4c69-b662-7dcab8b890fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"99a8770a-d296-486b-8d61-606fdb149632","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19700"}}
	{"specversion":"1.0","id":"4c506241-3090-4ec3-9b46-126bcfbdd298","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1eb7bea1-3c35-44cf-a955-53819115ad77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19700-3578/kubeconfig"}}
	{"specversion":"1.0","id":"2295a65c-f0a5-46d8-ac64-f1e3b3b1a85b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3578/.minikube"}}
	{"specversion":"1.0","id":"52e73406-e948-4095-af62-2b40e4096742","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1db5f52a-822f-4e01-9303-cd0f45b90109","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6a8298a8-f321-4cc5-83de-433c504a90a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.20s)
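Each stdout line above is a self-contained CloudEvents-style JSON object, which is what makes --output=json scriptable. A minimal decoding sketch, assuming only the fields visible in those payloads (the struct definition is illustrative, not minikube's own types):

// Rough sketch: read minikube's --output=json stream line by line and
// surface error events such as DRV_UNSUPPORTED_OS from the run above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// e.g. minikube start --output=json ... piped into this program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}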

                                                
                                    
TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (33.41s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.882577314s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.628929981s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.318512871s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (33.41s)

TestPause/serial/Start (27.21s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (27.212781815s)
--- PASS: TestPause/serial/Start (27.21s)

TestPause/serial/SecondStartNoReconfiguration (29.59s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (29.592565193s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.59s)

TestPause/serial/Pause (0.51s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.51s)

TestPause/serial/VerifyStatus (0.13s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (131.757312ms)
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
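Note that the command exits with status 2 here and the subtest still passes: with --layout=cluster, a paused cluster is reported through the JSON payload (StatusCode 418 "Paused" for the cluster and apiserver, 405 "Stopped" for the kubelet, 200 "OK" elsewhere) while the exit code mirrors the non-running state. A sketch of structs that would decode that payload; the field names are copied from the JSON above, not from minikube's source, so treat them as illustrative:

// Illustrative decoding of `minikube status --output=json --layout=cluster`.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"` // "OK", "Paused", "Stopped" in the output above
}

type node struct {
	component
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	component
	BinaryVersion string `json:"BinaryVersion"`
	Nodes         []node `json:"Nodes"`
}

func main() {
	// raw would come from the status command's stdout; trimmed example here
	raw := []byte(`{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Nodes":[]}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusName) // Paused
}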

                                                
                                    
TestPause/serial/Unpause (0.43s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.43s)

TestPause/serial/PauseAgain (0.55s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.55s)

TestPause/serial/DeletePaused (1.83s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.830589679s)
--- PASS: TestPause/serial/DeletePaused (1.83s)

TestPause/serial/VerifyDeletedResources (0.06s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

TestRunningBinaryUpgrade (67.8s)
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1742395156 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1742395156 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (29.684519627s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (34.30454102s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.17942688s)
--- PASS: TestRunningBinaryUpgrade (67.80s)

TestStoppedBinaryUpgrade/Setup (0.41s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.41s)

TestStoppedBinaryUpgrade/Upgrade (48.76s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.420361389 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.420361389 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.370609917s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.420361389 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.420361389 -p minikube stop: (23.638054607s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (10.755175405s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (48.76s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

TestKubernetesUpgrade (305.52s)
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (28.699496129s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.313595111s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (71.853358ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m15.800536014s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (68.792363ms)
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19700
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19700-3578/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19700-3578/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.189590431s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.321843663s)
--- PASS: TestKubernetesUpgrade (305.52s)

Test skip (61/166)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
96 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
97 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
98 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
100 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
102 TestFunctional/parallel/SSHCmd 0
103 TestFunctional/parallel/CpCmd 0
105 TestFunctional/parallel/FileSync 0
106 TestFunctional/parallel/CertSync 0
111 TestFunctional/parallel/DockerEnv 0
112 TestFunctional/parallel/PodmanEnv 0
114 TestFunctional/parallel/ImageCommands 0
115 TestFunctional/parallel/NonActiveRuntimeDisabled 0
123 TestGvisorAddon 0
124 TestMultiControlPlane 0
132 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
159 TestKicCustomNetwork 0
160 TestKicExistingNetwork 0
161 TestKicCustomSubnet 0
162 TestKicStaticIP 0
165 TestMountStart 0
166 TestMultiNode 0
167 TestNetworkPlugins 0
168 TestNoKubernetes 0
169 TestChangeNoneUser 0
180 TestPreload 0
181 TestScheduledStopWindows 0
182 TestScheduledStopUnix 0
183 TestSkaffold 0
186 TestStartStop/group/old-k8s-version 0.13
187 TestStartStop/group/newest-cni 0.13
188 TestStartStop/group/default-k8s-diff-port 0.13
189 TestStartStop/group/no-preload 0.13
190 TestStartStop/group/disable-driver-mounts 0.13
191 TestStartStop/group/embed-certs 0.13
192 TestInsufficientStorage 0
199 TestMissingContainerUpgrade 0

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)
=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctional/parallel/SSHCmd (0s)
=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)
=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)
=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)
=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)
=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)
=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)
=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)
=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)
=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)
=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)
=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.13s)
=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.13s)
=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

TestStartStop/group/default-k8s-diff-port (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

TestStartStop/group/no-preload (0.13s)
=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)

TestStartStop/group/disable-driver-mounts (0.13s)
=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.13s)
=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)