Test Report: none_Linux 19679

7cae0481c1ae024841826a3639f158d099448b48:2024-09-20:36298

Failed tests: 1/167

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 71.79s   |

TestAddons/parallel/Registry (71.79s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.794339ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-2j6bv" [853bfd79-8cfc-4715-b415-cd985cf6274d] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003019554s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-z6h9c" [13b5f103-a32f-4f30-ad42-68d95f995da9] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003310682s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.080517868s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/20 17:28:38 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC | 20 Sep 24 17:16 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC | 20 Sep 24 17:16 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC | 20 Sep 24 17:16 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC | 20 Sep 24 17:16 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC | 20 Sep 24 17:16 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC | 20 Sep 24 17:16 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:44693               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC | 20 Sep 24 17:16 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC | 20 Sep 24 17:17 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:17 UTC | 20 Sep 24 17:17 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 20 Sep 24 17:17 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 20 Sep 24 17:17 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 20 Sep 24 17:17 UTC | 20 Sep 24 17:18 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 20 Sep 24 17:19 UTC | 20 Sep 24 17:19 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:28 UTC | 20 Sep 24 17:28 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 20 Sep 24 17:28 UTC | 20 Sep 24 17:28 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:17:05
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:17:05.521909  115828 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:17:05.522033  115828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:17:05.522042  115828 out.go:358] Setting ErrFile to fd 2...
	I0920 17:17:05.522047  115828 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:17:05.522219  115828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-105157/.minikube/bin
	I0920 17:17:05.522827  115828 out.go:352] Setting JSON to false
	I0920 17:17:05.523704  115828 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3577,"bootTime":1726849048,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:17:05.523813  115828 start.go:139] virtualization: kvm guest
	I0920 17:17:05.526325  115828 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 17:17:05.527902  115828 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19679-105157/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:17:05.527944  115828 notify.go:220] Checking for updates...
	I0920 17:17:05.528047  115828 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 17:17:05.529386  115828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:17:05.530556  115828 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-105157/kubeconfig
	I0920 17:17:05.531782  115828 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-105157/.minikube
	I0920 17:17:05.533085  115828 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:17:05.534343  115828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:17:05.535725  115828 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:17:05.546398  115828 out.go:177] * Using the none driver based on user configuration
	I0920 17:17:05.547671  115828 start.go:297] selected driver: none
	I0920 17:17:05.547692  115828 start.go:901] validating driver "none" against <nil>
	I0920 17:17:05.547705  115828 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:17:05.547745  115828 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0920 17:17:05.548086  115828 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0920 17:17:05.548660  115828 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:17:05.548950  115828 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:17:05.548984  115828 cni.go:84] Creating CNI manager for ""
	I0920 17:17:05.549035  115828 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:17:05.549050  115828 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 17:17:05.549100  115828 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:17:05.550876  115828 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0920 17:17:05.552607  115828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/config.json ...
	I0920 17:17:05.552647  115828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/config.json: {Name:mkc563dfa1a9d2303296f91fc3b576b9c7465d8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:17:05.552819  115828 start.go:360] acquireMachinesLock for minikube: {Name:mk7b57314b06bbfd3d94b770f16bede56e579e68 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:17:05.552859  115828 start.go:364] duration metric: took 22.687µs to acquireMachinesLock for "minikube"
	I0920 17:17:05.552878  115828 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 17:17:05.552942  115828 start.go:125] createHost starting for "" (driver="none")
	I0920 17:17:05.554448  115828 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0920 17:17:05.555691  115828 exec_runner.go:51] Run: systemctl --version
	I0920 17:17:05.558317  115828 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0920 17:17:05.558346  115828 client.go:168] LocalClient.Create starting
	I0920 17:17:05.558423  115828 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-105157/.minikube/certs/ca.pem
	I0920 17:17:05.558453  115828 main.go:141] libmachine: Decoding PEM data...
	I0920 17:17:05.558468  115828 main.go:141] libmachine: Parsing certificate...
	I0920 17:17:05.558522  115828 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19679-105157/.minikube/certs/cert.pem
	I0920 17:17:05.558538  115828 main.go:141] libmachine: Decoding PEM data...
	I0920 17:17:05.558549  115828 main.go:141] libmachine: Parsing certificate...
	I0920 17:17:05.558873  115828 client.go:171] duration metric: took 518.566µs to LocalClient.Create
	I0920 17:17:05.558897  115828 start.go:167] duration metric: took 589.104µs to libmachine.API.Create "minikube"
	I0920 17:17:05.558902  115828 start.go:293] postStartSetup for "minikube" (driver="none")
	I0920 17:17:05.558933  115828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:17:05.558969  115828 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:17:05.568662  115828 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 17:17:05.568715  115828 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 17:17:05.568730  115828 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 17:17:05.570751  115828 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0920 17:17:05.572097  115828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-105157/.minikube/addons for local assets ...
	I0920 17:17:05.572161  115828 filesync.go:126] Scanning /home/jenkins/minikube-integration/19679-105157/.minikube/files for local assets ...
	I0920 17:17:05.572183  115828 start.go:296] duration metric: took 13.275319ms for postStartSetup
	I0920 17:17:05.573357  115828 profile.go:143] Saving config to /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/config.json ...
	I0920 17:17:05.573542  115828 start.go:128] duration metric: took 20.588789ms to createHost
	I0920 17:17:05.573553  115828 start.go:83] releasing machines lock for "minikube", held for 20.681576ms
	I0920 17:17:05.574557  115828 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 17:17:05.574624  115828 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0920 17:17:05.576565  115828 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:17:05.576622  115828 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:17:05.585828  115828 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 17:17:05.585853  115828 start.go:495] detecting cgroup driver to use...
	I0920 17:17:05.585878  115828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:17:05.585976  115828 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:17:05.607184  115828 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 17:17:05.615890  115828 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 17:17:05.625050  115828 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 17:17:05.625140  115828 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 17:17:05.633799  115828 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:17:05.644528  115828 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 17:17:05.653371  115828 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:17:05.661757  115828 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:17:05.670462  115828 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 17:17:05.679952  115828 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 17:17:05.691332  115828 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 17:17:05.701215  115828 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:17:05.708621  115828 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:17:05.717423  115828 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 17:17:05.938668  115828 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0920 17:17:06.003263  115828 start.go:495] detecting cgroup driver to use...
	I0920 17:17:06.003310  115828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:17:06.003422  115828 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:17:06.022348  115828 exec_runner.go:51] Run: which cri-dockerd
	I0920 17:17:06.023211  115828 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 17:17:06.030721  115828 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0920 17:17:06.030746  115828 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0920 17:17:06.030775  115828 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0920 17:17:06.037796  115828 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 17:17:06.037935  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1974753367 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0920 17:17:06.045914  115828 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0920 17:17:06.270588  115828 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0920 17:17:06.488160  115828 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 17:17:06.488304  115828 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0920 17:17:06.488316  115828 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0920 17:17:06.488355  115828 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0920 17:17:06.496330  115828 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0920 17:17:06.496466  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube806034283 /etc/docker/daemon.json
	I0920 17:17:06.504713  115828 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 17:17:06.719404  115828 exec_runner.go:51] Run: sudo systemctl restart docker
	I0920 17:17:07.026059  115828 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 17:17:07.037253  115828 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0920 17:17:07.053141  115828 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 17:17:07.063471  115828 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0920 17:17:07.280266  115828 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0920 17:17:07.481032  115828 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 17:17:07.684458  115828 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0920 17:17:07.698480  115828 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 17:17:07.708909  115828 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 17:17:07.910022  115828 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0920 17:17:07.976271  115828 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 17:17:07.976371  115828 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0920 17:17:07.977708  115828 start.go:563] Will wait 60s for crictl version
	I0920 17:17:07.977758  115828 exec_runner.go:51] Run: which crictl
	I0920 17:17:07.978742  115828 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0920 17:17:08.007712  115828 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0920 17:17:08.007771  115828 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0920 17:17:08.028659  115828 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0920 17:17:08.052166  115828 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0920 17:17:08.052244  115828 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0920 17:17:08.054996  115828 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0920 17:17:08.056297  115828 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:17:08.056421  115828 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:17:08.056434  115828 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0920 17:17:08.056529  115828 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0920 17:17:08.056583  115828 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0920 17:17:08.101210  115828 cni.go:84] Creating CNI manager for ""
	I0920 17:17:08.101240  115828 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:17:08.101251  115828 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:17:08.101273  115828 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:17:08.101415  115828 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
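The generated config above pins the pod network in two places: `podSubnet` in the ClusterConfiguration and `clusterCIDR` in the KubeProxyConfiguration (both 10.244.0.0/16 here, matching the "Using pod CIDR" line). A minimal stdlib sketch of a consistency check over such a multi-document config — illustrative only, not minikube's code:

```python
import re

# Excerpt of the kubeadm config rendered in the log above.
CONFIG = """\
kind: ClusterConfiguration
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
"""

def field(name: str, text: str) -> str:
    """Pull a scalar YAML field value, stripping optional quotes."""
    m = re.search(rf'^\s*{name}:\s*"?([^"\n]+)"?\s*$', text, re.M)
    if not m:
        raise KeyError(name)
    return m.group(1)

pod_subnet = field("podSubnet", CONFIG)
cluster_cidr = field("clusterCIDR", CONFIG)
assert pod_subnet == cluster_cidr == "10.244.0.0/16"
print("pod CIDR consistent:", pod_subnet)
```

A mismatch between these two fields would leave kube-proxy programming rules for the wrong CIDR, so the check is cheap insurance when templating configs by hand.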
	I0920 17:17:08.101488  115828 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:17:08.110043  115828 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 17:17:08.110097  115828 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 17:17:08.117728  115828 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 17:17:08.117774  115828 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:17:08.117795  115828 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 17:17:08.117855  115828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-105157/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 17:17:08.117866  115828 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 17:17:08.117910  115828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-105157/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 17:17:08.130285  115828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-105157/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
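The "Not caching binary" lines above fetch each binary through a `?checksum=file:…sha256` query, i.e. the download is verified against its published SHA-256 sidecar before being installed. A self-contained sketch of that verification step (hypothetical helper, not minikube's actual implementation):

```python
import hashlib

def verify_sha256(data: bytes, expected_hex: str) -> bool:
    """Return True when the payload hashes to the published digest."""
    return hashlib.sha256(data).hexdigest() == expected_hex.strip().lower()

# Simulated download plus the contents of its .sha256 sidecar file.
payload = b"fake-kubelet-binary"
published = hashlib.sha256(payload).hexdigest()

assert verify_sha256(payload, published)          # intact download passes
assert not verify_sha256(payload + b"x", published)  # any tampering fails
print("checksum ok")
```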
	I0920 17:17:08.169429  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2590111613 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:17:08.175504  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3561397488 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:17:08.196497  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube476908668 /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 17:17:08.261556  115828 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:17:08.269571  115828 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0920 17:17:08.269601  115828 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0920 17:17:08.269649  115828 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0920 17:17:08.276843  115828 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0920 17:17:08.276976  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube253265787 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0920 17:17:08.284351  115828 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0920 17:17:08.284368  115828 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0920 17:17:08.284404  115828 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0920 17:17:08.291286  115828 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:17:08.291416  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1784944219 /lib/systemd/system/kubelet.service
	I0920 17:17:08.298801  115828 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0920 17:17:08.298908  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4132063982 /var/tmp/minikube/kubeadm.yaml.new
	I0920 17:17:08.305994  115828 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0920 17:17:08.307176  115828 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 17:17:08.497847  115828 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0920 17:17:08.512193  115828 certs.go:68] Setting up /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube for IP: 10.138.0.48
	I0920 17:17:08.512215  115828 certs.go:194] generating shared ca certs ...
	I0920 17:17:08.512239  115828 certs.go:226] acquiring lock for ca certs: {Name:mk53fefc27b4164093c85b6fc9946b06841e8cf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:17:08.512380  115828 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19679-105157/.minikube/ca.key
	I0920 17:17:08.512434  115828 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19679-105157/.minikube/proxy-client-ca.key
	I0920 17:17:08.512447  115828 certs.go:256] generating profile certs ...
	I0920 17:17:08.512516  115828 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/client.key
	I0920 17:17:08.512534  115828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/client.crt with IP's: []
	I0920 17:17:08.593720  115828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/client.crt ...
	I0920 17:17:08.593758  115828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/client.crt: {Name:mk2dbd3563730ff28bf2dd4ddfe1d830b6c0dec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:17:08.593906  115828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/client.key ...
	I0920 17:17:08.593921  115828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/client.key: {Name:mkb5b6835c495cab15895ad7c0d19a027c3ea11a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:17:08.594011  115828 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0920 17:17:08.594030  115828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0920 17:17:08.721145  115828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0920 17:17:08.721179  115828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk821b9bb844f30bedb76e7158ecf7d8aaa23448 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:17:08.721330  115828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0920 17:17:08.721346  115828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mk5c000e6e17509bd485c3822c858072d029618b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:17:08.721434  115828 certs.go:381] copying /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/apiserver.crt
	I0920 17:17:08.721532  115828 certs.go:385] copying /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/apiserver.key
	I0920 17:17:08.721613  115828 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/proxy-client.key
	I0920 17:17:08.721633  115828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0920 17:17:08.801817  115828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/proxy-client.crt ...
	I0920 17:17:08.801849  115828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/proxy-client.crt: {Name:mkd59dbd97537695f08f6ca865fb92142bc1d4fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:17:08.802003  115828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/proxy-client.key ...
	I0920 17:17:08.802019  115828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/proxy-client.key: {Name:mk5846b7e5cac337937da01bed3acd08cdf9dfb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:17:08.802194  115828 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-105157/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 17:17:08.802238  115828 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-105157/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:17:08.802274  115828 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-105157/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:17:08.802314  115828 certs.go:484] found cert: /home/jenkins/minikube-integration/19679-105157/.minikube/certs/key.pem (1679 bytes)
	I0920 17:17:08.802912  115828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-105157/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:17:08.803089  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3049942755 /var/lib/minikube/certs/ca.crt
	I0920 17:17:08.811555  115828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-105157/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:17:08.811684  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2670409339 /var/lib/minikube/certs/ca.key
	I0920 17:17:08.819496  115828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-105157/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:17:08.819628  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3046912287 /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:17:08.827114  115828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-105157/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:17:08.827243  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1871964418 /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:17:08.836772  115828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0920 17:17:08.836894  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1545253188 /var/lib/minikube/certs/apiserver.crt
	I0920 17:17:08.844023  115828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 17:17:08.844153  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube81412112 /var/lib/minikube/certs/apiserver.key
	I0920 17:17:08.852136  115828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:17:08.852266  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3245661596 /var/lib/minikube/certs/proxy-client.crt
	I0920 17:17:08.859414  115828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-105157/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 17:17:08.859531  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2860764289 /var/lib/minikube/certs/proxy-client.key
	I0920 17:17:08.866411  115828 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0920 17:17:08.866428  115828 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:17:08.866455  115828 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:17:08.873279  115828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19679-105157/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:17:08.873382  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2382202793 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:17:08.881091  115828 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:17:08.881186  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2845036876 /var/lib/minikube/kubeconfig
	I0920 17:17:08.888464  115828 exec_runner.go:51] Run: openssl version
	I0920 17:17:08.891246  115828 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:17:08.899289  115828 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:17:08.900596  115828 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 20 17:17 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:17:08.900647  115828 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:17:08.903433  115828 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:17:08.910843  115828 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:17:08.911914  115828 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:17:08.911954  115828 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:17:08.912071  115828 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 17:17:08.927242  115828 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:17:08.934999  115828 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:17:08.942432  115828 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0920 17:17:08.962190  115828 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:17:08.969890  115828 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:17:08.969913  115828 kubeadm.go:157] found existing configuration files:
	
	I0920 17:17:08.969946  115828 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:17:08.977250  115828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:17:08.977293  115828 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:17:08.983962  115828 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:17:08.991316  115828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:17:08.991363  115828 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:17:08.997996  115828 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:17:09.005211  115828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:17:09.005253  115828 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:17:09.012195  115828 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:17:09.019954  115828 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:17:09.019998  115828 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 17:17:09.026860  115828 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:17:09.057611  115828 kubeadm.go:310] W0920 17:17:09.057480  116696 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:17:09.058162  115828 kubeadm.go:310] W0920 17:17:09.058081  116696 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:17:09.059784  115828 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:17:09.059846  115828 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:17:09.149064  115828 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 17:17:09.149166  115828 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:17:09.149179  115828 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:17:09.149183  115828 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:17:09.158948  115828 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:17:09.162061  115828 out.go:235]   - Generating certificates and keys ...
	I0920 17:17:09.162112  115828 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:17:09.162134  115828 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:17:09.304223  115828 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:17:09.417268  115828 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:17:09.638756  115828 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:17:09.943489  115828 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:17:10.137226  115828 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:17:10.137315  115828 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0920 17:17:10.227110  115828 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:17:10.227197  115828 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0920 17:17:10.543812  115828 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:17:10.639353  115828 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:17:10.711435  115828 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:17:10.711585  115828 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:17:10.843245  115828 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:17:10.942094  115828 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:17:11.295913  115828 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:17:11.467066  115828 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:17:11.687206  115828 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:17:11.687705  115828 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:17:11.691147  115828 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:17:11.693247  115828 out.go:235]   - Booting up control plane ...
	I0920 17:17:11.693272  115828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:17:11.693292  115828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:17:11.693300  115828 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:17:11.713394  115828 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:17:11.717915  115828 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:17:11.717937  115828 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:17:11.954517  115828 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:17:11.954540  115828 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:17:12.455993  115828 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.454672ms
	I0920 17:17:12.456019  115828 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:17:17.458146  115828 kubeadm.go:310] [api-check] The API server is healthy after 5.002126263s
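The `[kubelet-check]` and `[api-check]` phases above poll a local healthz endpoint (e.g. http://127.0.0.1:10248/healthz) until it answers 200 or a deadline expires. A simplified stdlib sketch of that wait loop, with a stand-in server so it runs anywhere — illustrative; kubeadm's real implementation differs:

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def wait_healthy(url: str, timeout: float, interval: float = 0.1) -> bool:
    """Poll url until it returns HTTP 200 or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=1) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # endpoint not up yet; keep polling
        time.sleep(interval)
    return False

class Healthz(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200 if self.path == "/healthz" else 404)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo quiet
        pass

# Stand-in for the kubelet's healthz endpoint, bound to a free port.
server = ThreadingHTTPServer(("127.0.0.1", 0), Healthz)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/healthz"

assert wait_healthy(url, timeout=5.0)
server.shutdown()
print("healthz reachable")
```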
	I0920 17:17:17.469396  115828 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:17:17.480269  115828 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:17:17.497160  115828 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:17:17.497184  115828 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:17:17.503202  115828 kubeadm.go:310] [bootstrap-token] Using token: blqf35.e49ei8t8fj67u380
	I0920 17:17:17.504590  115828 out.go:235]   - Configuring RBAC rules ...
	I0920 17:17:17.504630  115828 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:17:17.508936  115828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:17:17.514690  115828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:17:17.517126  115828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:17:17.519524  115828 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:17:17.521895  115828 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:17:17.864121  115828 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:17:18.284662  115828 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:17:18.863747  115828 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:17:18.864494  115828 kubeadm.go:310] 
	I0920 17:17:18.864515  115828 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:17:18.864519  115828 kubeadm.go:310] 
	I0920 17:17:18.864524  115828 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:17:18.864528  115828 kubeadm.go:310] 
	I0920 17:17:18.864532  115828 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:17:18.864536  115828 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:17:18.864539  115828 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:17:18.864543  115828 kubeadm.go:310] 
	I0920 17:17:18.864547  115828 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:17:18.864550  115828 kubeadm.go:310] 
	I0920 17:17:18.864554  115828 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:17:18.864557  115828 kubeadm.go:310] 
	I0920 17:17:18.864566  115828 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:17:18.864570  115828 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:17:18.864576  115828 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:17:18.864579  115828 kubeadm.go:310] 
	I0920 17:17:18.864583  115828 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:17:18.864587  115828 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:17:18.864591  115828 kubeadm.go:310] 
	I0920 17:17:18.864595  115828 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token blqf35.e49ei8t8fj67u380 \
	I0920 17:17:18.864600  115828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:85b7ac9760f6e01a7c30b5525a6f2b09b90cf729cdfe4f5b1cd35876bee11932 \
	I0920 17:17:18.864605  115828 kubeadm.go:310] 	--control-plane 
	I0920 17:17:18.864615  115828 kubeadm.go:310] 
	I0920 17:17:18.864619  115828 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:17:18.864626  115828 kubeadm.go:310] 
	I0920 17:17:18.864630  115828 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token blqf35.e49ei8t8fj67u380 \
	I0920 17:17:18.864634  115828 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:85b7ac9760f6e01a7c30b5525a6f2b09b90cf729cdfe4f5b1cd35876bee11932 
	I0920 17:17:18.867600  115828 cni.go:84] Creating CNI manager for ""
	I0920 17:17:18.867626  115828 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:17:18.869361  115828 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 17:17:18.870618  115828 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0920 17:17:18.881661  115828 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 17:17:18.881788  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2476045591 /etc/cni/net.d/1-k8s.conflist
	I0920 17:17:18.892387  115828 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:17:18.892503  115828 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_20T17_17_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0920 17:17:18.892999  115828 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:17:18.902061  115828 ops.go:34] apiserver oom_adj: -16
	I0920 17:17:18.959498  115828 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:17:19.459834  115828 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:17:19.960520  115828 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:17:20.460115  115828 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:17:20.959696  115828 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:17:21.460449  115828 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:17:21.960359  115828 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:17:22.459853  115828 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:17:22.959679  115828 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:17:23.021771  115828 kubeadm.go:1113] duration metric: took 4.129368135s to wait for elevateKubeSystemPrivileges
	I0920 17:17:23.021812  115828 kubeadm.go:394] duration metric: took 14.109864047s to StartCluster
	I0920 17:17:23.021840  115828 settings.go:142] acquiring lock: {Name:mk7f0a5a8dd197edbadb5c6aaf5e16cbc27fb68d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:17:23.021925  115828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19679-105157/kubeconfig
	I0920 17:17:23.022530  115828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19679-105157/kubeconfig: {Name:mkd0d6345a699320a1720cb50a2bcc36b8896a86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:17:23.022757  115828 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:17:23.022837  115828 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 17:17:23.022966  115828 addons.go:69] Setting yakd=true in profile "minikube"
	I0920 17:17:23.022972  115828 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0920 17:17:23.022975  115828 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:17:23.022991  115828 addons.go:234] Setting addon yakd=true in "minikube"
	I0920 17:17:23.022993  115828 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0920 17:17:23.023016  115828 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0920 17:17:23.023011  115828 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0920 17:17:23.023027  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.023029  115828 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0920 17:17:23.023024  115828 addons.go:69] Setting volcano=true in profile "minikube"
	I0920 17:17:23.023038  115828 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0920 17:17:23.023044  115828 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0920 17:17:23.023049  115828 addons.go:234] Setting addon volcano=true in "minikube"
	I0920 17:17:23.023055  115828 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0920 17:17:23.023058  115828 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0920 17:17:23.023081  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.023088  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.023101  115828 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0920 17:17:23.023035  115828 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0920 17:17:23.023121  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.023132  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.023332  115828 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0920 17:17:23.023422  115828 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0920 17:17:23.023480  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.023655  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.023671  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.023705  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.023742  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.023757  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.023756  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.023771  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.023771  115828 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0920 17:17:23.023772  115828 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0920 17:17:23.023788  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.023787  115828 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0920 17:17:23.023796  115828 mustload.go:65] Loading cluster: minikube
	I0920 17:17:23.023807  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.023997  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.024015  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.024016  115828 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:17:23.024102  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.023030  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.024277  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.024291  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.024319  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.023759  115828 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0920 17:17:23.025018  115828 out.go:177] * Configuring local host environment ...
	I0920 17:17:23.025382  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.025413  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.025454  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.023742  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.025649  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.025691  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 17:17:23.026435  115828 out.go:270] * 
	W0920 17:17:23.026473  115828 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0920 17:17:23.026486  115828 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0920 17:17:23.026492  115828 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0920 17:17:23.026499  115828 out.go:270] * 
	W0920 17:17:23.026553  115828 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0920 17:17:23.026560  115828 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0920 17:17:23.026566  115828 out.go:270] * 
	W0920 17:17:23.026603  115828 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0920 17:17:23.026611  115828 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0920 17:17:23.026624  115828 out.go:270] * 
	W0920 17:17:23.026630  115828 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0920 17:17:23.026655  115828 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 17:17:23.027816  115828 addons.go:69] Setting registry=true in profile "minikube"
	I0920 17:17:23.027841  115828 addons.go:234] Setting addon registry=true in "minikube"
	I0920 17:17:23.027863  115828 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0920 17:17:23.027884  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.027902  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.028366  115828 out.go:177] * Verifying Kubernetes components...
	I0920 17:17:23.023050  115828 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0920 17:17:23.023756  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.028879  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.028993  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.029396  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.029423  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.029437  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.029451  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.029482  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.029690  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.029715  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.029749  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.029812  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.029835  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.029870  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.028834  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.029357  115828 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 17:17:23.030608  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.030655  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.030698  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.029383  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.044097  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.051430  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.059763  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.060835  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.063753  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.075230  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.075305  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.078206  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.078272  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.081420  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.081490  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.084186  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.084250  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.090838  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.090911  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.092978  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.093007  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.095782  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.095807  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.098571  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.099070  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.099988  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.100511  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.100554  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.105261  115828 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 17:17:23.105835  115828 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:17:23.108100  115828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:17:23.108124  115828 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0920 17:17:23.108132  115828 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:17:23.108172  115828 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:17:23.108333  115828 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 17:17:23.108953  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.112165  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.115609  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.116444  115828 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 17:17:23.120331  115828 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 17:17:23.122386  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.122414  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.122433  115828 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:17:23.122461  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.122465  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 17:17:23.122642  115828 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 17:17:23.122664  115828 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 17:17:23.122800  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube14255100 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 17:17:23.123035  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4169739518 /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:17:23.124155  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.125117  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.125408  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.127097  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.127122  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.127369  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.127388  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.128520  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.128587  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.129829  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.129878  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.132226  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.133191  115828 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0920 17:17:23.133242  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.133889  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.133914  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.133946  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.135237  115828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 17:17:23.135265  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 17:17:23.135383  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2863387781 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 17:17:23.137189  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:17:23.138885  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.138931  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.139849  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:17:23.139970  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2652149578 /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:17:23.141588  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.141638  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.142645  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.142691  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.144898  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.152222  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.152278  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.153054  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.153102  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.163057  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.165186  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.165209  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.165186  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.165225  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.165631  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.165652  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.165878  115828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 17:17:23.165901  115828 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 17:17:23.166011  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2678213494 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 17:17:23.166450  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.166469  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.170257  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.170438  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.171178  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.171200  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.171270  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.172007  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:17:23.172221  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.172459  115828 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0920 17:17:23.172501  115828 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 17:17:23.173429  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.173535  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.175074  115828 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 17:17:23.175104  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 17:17:23.175228  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube579853920 /etc/kubernetes/addons/deployment.yaml
	I0920 17:17:23.175397  115828 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 17:17:23.175581  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.175805  115828 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 17:17:23.176762  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.176785  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.176893  115828 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:17:23.176917  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 17:17:23.177030  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2034855305 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:17:23.177790  115828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 17:17:23.179147  115828 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 17:17:23.179199  115828 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 17:17:23.179220  115828 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 17:17:23.179328  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1555813072 /etc/kubernetes/addons/yakd-ns.yaml
	I0920 17:17:23.181254  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.181469  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.181661  115828 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 17:17:23.181693  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 17:17:23.181758  115828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 17:17:23.181809  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube907307166 /etc/kubernetes/addons/registry-rc.yaml
	I0920 17:17:23.181990  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.182043  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.182669  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.182725  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.183200  115828 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 17:17:23.183281  115828 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 17:17:23.185113  115828 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 17:17:23.185144  115828 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 17:17:23.185259  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3470731530 /etc/kubernetes/addons/ig-namespace.yaml
	I0920 17:17:23.185773  115828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 17:17:23.185838  115828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 17:17:23.185872  115828 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 17:17:23.186003  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1728088114 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 17:17:23.189343  115828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 17:17:23.190684  115828 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 17:17:23.194755  115828 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 17:17:23.194783  115828 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 17:17:23.194932  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1034627341 /etc/kubernetes/addons/yakd-sa.yaml
	I0920 17:17:23.200285  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 17:17:23.200888  115828 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 17:17:23.202490  115828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 17:17:23.203838  115828 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 17:17:23.205233  115828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 17:17:23.205278  115828 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 17:17:23.205408  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3398835875 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 17:17:23.209978  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.210004  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.211023  115828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 17:17:23.211076  115828 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 17:17:23.211251  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1619377834 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 17:17:23.211828  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:17:23.213414  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.213437  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.217662  115828 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:17:23.217670  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.217689  115828 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 17:17:23.217806  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2920577054 /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:17:23.218907  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.218915  115828 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0920 17:17:23.218968  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:23.219820  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:23.219839  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:23.219874  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:23.222321  115828 out.go:177]   - Using image docker.io/busybox:stable
	I0920 17:17:23.228165  115828 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 17:17:23.228243  115828 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 17:17:23.228364  115828 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 17:17:23.229033  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2659614497 /etc/kubernetes/addons/registry-svc.yaml
	I0920 17:17:23.229141  115828 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 17:17:23.229167  115828 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 17:17:23.229309  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube294407788 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 17:17:23.230485  115828 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:17:23.230507  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 17:17:23.230600  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1599458162 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:17:23.231008  115828 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 17:17:23.233426  115828 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 17:17:23.233455  115828 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 17:17:23.233569  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1978895646 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 17:17:23.235514  115828 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 17:17:23.235537  115828 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 17:17:23.235661  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1692998508 /etc/kubernetes/addons/yakd-crb.yaml
	I0920 17:17:23.248440  115828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 17:17:23.248473  115828 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 17:17:23.248630  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1613165523 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 17:17:23.250508  115828 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 17:17:23.250536  115828 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 17:17:23.250657  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2096293941 /etc/kubernetes/addons/ig-role.yaml
	I0920 17:17:23.257593  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:17:23.257634  115828 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:17:23.257664  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 17:17:23.257803  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1931386945 /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:17:23.260363  115828 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 17:17:23.260389  115828 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 17:17:23.260515  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2489623033 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 17:17:23.262197  115828 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 17:17:23.262222  115828 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 17:17:23.262332  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4117650083 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 17:17:23.262881  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:23.278684  115828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 17:17:23.278721  115828 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 17:17:23.278844  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2038911609 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 17:17:23.284433  115828 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 17:17:23.284465  115828 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 17:17:23.284584  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1504098436 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 17:17:23.284975  115828 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 17:17:23.284998  115828 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 17:17:23.285103  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4293875894 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 17:17:23.290232  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:17:23.298579  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:23.298635  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:23.313865  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:17:23.319273  115828 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:17:23.319309  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 17:17:23.319434  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1363667296 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:17:23.321033  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:23.321064  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:23.323810  115828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 17:17:23.323847  115828 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 17:17:23.323973  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3920290329 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 17:17:23.324160  115828 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 17:17:23.324190  115828 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 17:17:23.324301  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2870264300 /etc/kubernetes/addons/yakd-svc.yaml
	I0920 17:17:23.326751  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:23.326795  115828 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:17:23.326808  115828 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0920 17:17:23.326819  115828 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0920 17:17:23.326864  115828 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:17:23.335117  115828 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 17:17:23.335148  115828 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 17:17:23.335270  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube11625445 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 17:17:23.341049  115828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 17:17:23.341080  115828 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 17:17:23.341209  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1136694481 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 17:17:23.382972  115828 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:17:23.383153  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube258793836 /etc/kubernetes/addons/storageclass.yaml
	I0920 17:17:23.385424  115828 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 17:17:23.385455  115828 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 17:17:23.385575  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube531201460 /etc/kubernetes/addons/ig-crd.yaml
	I0920 17:17:23.402391  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:17:23.415067  115828 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:17:23.415100  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 17:17:23.415218  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube617251193 /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:17:23.423884  115828 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 17:17:23.423928  115828 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 17:17:23.424055  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube608465653 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 17:17:23.427524  115828 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0920 17:17:23.447949  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:17:23.453950  115828 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:17:23.453985  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 17:17:23.454111  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3756573474 /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:17:23.484724  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:17:23.499606  115828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 17:17:23.499645  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 17:17:23.499779  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4186287444 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 17:17:23.534732  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:17:23.569244  115828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 17:17:23.569287  115828 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 17:17:23.569427  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3881670023 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 17:17:23.610547  115828 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0920 17:17:23.613794  115828 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0920 17:17:23.613820  115828 node_ready.go:38] duration metric: took 3.233425ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0920 17:17:23.613832  115828 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:17:23.623431  115828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-688vb" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:23.673255  115828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 17:17:23.673301  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 17:17:23.673474  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2114231022 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 17:17:23.689833  115828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 17:17:23.689871  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 17:17:23.690016  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4248866394 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 17:17:23.742100  115828 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0920 17:17:23.766089  115828 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:17:23.766132  115828 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 17:17:23.766263  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1273488777 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:17:23.900070  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:17:24.040159  115828 addons.go:475] Verifying addon registry=true in "minikube"
	I0920 17:17:24.046967  115828 out.go:177] * Verifying registry addon...
	I0920 17:17:24.049329  115828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 17:17:24.066161  115828 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 17:17:24.066192  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:24.248893  115828 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0920 17:17:24.509677  115828 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.024884177s)
	I0920 17:17:24.516944  115828 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0920 17:17:24.563486  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:24.674253  115828 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.416594423s)
	I0920 17:17:24.674287  115828 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0920 17:17:24.682431  115828 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.368519133s)
	I0920 17:17:24.851702  115828 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.316896202s)
	I0920 17:17:25.053772  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:25.102011  115828 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.699561118s)
	W0920 17:17:25.102101  115828 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 17:17:25.102169  115828 retry.go:31] will retry after 304.325425ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 17:17:25.407519  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:17:25.558246  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:25.632396  115828 pod_ready.go:103] pod "coredns-7c65d6cfc9-688vb" in "kube-system" namespace has status "Ready":"False"
	I0920 17:17:26.084926  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:26.252460  115828 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.352319787s)
	I0920 17:17:26.252501  115828 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0920 17:17:26.265730  115828 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 17:17:26.268579  115828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 17:17:26.276599  115828 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 17:17:26.276631  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:26.337548  115828 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.200314635s)
	I0920 17:17:26.553360  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:26.774739  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:27.054009  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:27.274003  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:27.553011  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:27.773811  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:28.053112  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:28.129829  115828 pod_ready.go:103] pod "coredns-7c65d6cfc9-688vb" in "kube-system" namespace has status "Ready":"False"
	I0920 17:17:28.170796  115828 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.763217s)
	I0920 17:17:28.273738  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:28.553384  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:28.773816  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:29.053896  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:29.274048  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:29.553287  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:29.773831  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:30.052937  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:30.183204  115828 pod_ready.go:103] pod "coredns-7c65d6cfc9-688vb" in "kube-system" namespace has status "Ready":"False"
	I0920 17:17:30.184839  115828 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 17:17:30.185007  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4256810567 /var/lib/minikube/google_application_credentials.json
	I0920 17:17:30.197434  115828 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 17:17:30.197546  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube604199196 /var/lib/minikube/google_cloud_project
	I0920 17:17:30.208299  115828 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0920 17:17:30.208350  115828 host.go:66] Checking if "minikube" exists ...
	I0920 17:17:30.208907  115828 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:17:30.208928  115828 api_server.go:166] Checking apiserver status ...
	I0920 17:17:30.208956  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:30.225628  115828 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/117128/cgroup
	I0920 17:17:30.234552  115828 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec"
	I0920 17:17:30.234608  115828 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/c14f9ee625a4957f0613efa81926274c7a973b56aa3dd43681032f02019b0dec/freezer.state
	I0920 17:17:30.243092  115828 api_server.go:204] freezer state: "THAWED"
	I0920 17:17:30.243116  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:30.260947  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:30.261022  115828 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 17:17:30.272930  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:30.334121  115828 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:17:30.408611  115828 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 17:17:30.470870  115828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 17:17:30.470921  115828 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 17:17:30.471490  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1128142932 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 17:17:30.498540  115828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 17:17:30.498583  115828 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 17:17:30.498715  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube135105315 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 17:17:30.508891  115828 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:17:30.508920  115828 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 17:17:30.509020  115828 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3065371787 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:17:30.517993  115828 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:17:30.553551  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:30.772407  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:31.015169  115828 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0920 17:17:31.016801  115828 out.go:177] * Verifying gcp-auth addon...
	I0920 17:17:31.019440  115828 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 17:17:31.023631  115828 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:17:31.124762  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:31.273013  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:31.553417  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:31.629042  115828 pod_ready.go:98] pod "coredns-7c65d6cfc9-688vb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:17:31 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:17:23 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:17:23 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:17:23 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:17:23 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}]
PodIP:10.244.0.5 PodIPs:[{IP:10.244.0.5}] StartTime:2024-09-20 17:17:23 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 17:17:24 +0000 UTC,FinishedAt:2024-09-20 17:17:31 +0000 UTC,ContainerID:docker://2f0edc6f3f8c0c512206171e33613f66c72305062f8956d145c5bdd706985438,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://2f0edc6f3f8c0c512206171e33613f66c72305062f8956d145c5bdd706985438 Started:0xc001756f40 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002375280} {Name:kube-api-access-5xrjs MountPath:/var/run/secrets/kubernetes.io/serviceaccount R
eadOnly:true RecursiveReadOnly:0xc002375290}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 17:17:31.629090  115828 pod_ready.go:82] duration metric: took 8.005135802s for pod "coredns-7c65d6cfc9-688vb" in "kube-system" namespace to be "Ready" ...
	E0920 17:17:31.629105  115828 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-688vb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:17:31 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:17:23 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:17:23 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:17:23 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:17:23 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.
48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.5 PodIPs:[{IP:10.244.0.5}] StartTime:2024-09-20 17:17:23 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 17:17:24 +0000 UTC,FinishedAt:2024-09-20 17:17:31 +0000 UTC,ContainerID:docker://2f0edc6f3f8c0c512206171e33613f66c72305062f8956d145c5bdd706985438,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://2f0edc6f3f8c0c512206171e33613f66c72305062f8956d145c5bdd706985438 Started:0xc001756f40 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002375280} {Name:kube-api-access-5xrjs MountPath:/var/run/secrets/k
ubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002375290}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 17:17:31.629118  115828 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fsp56" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:31.633337  115828 pod_ready.go:93] pod "coredns-7c65d6cfc9-fsp56" in "kube-system" namespace has status "Ready":"True"
	I0920 17:17:31.633360  115828 pod_ready.go:82] duration metric: took 4.228541ms for pod "coredns-7c65d6cfc9-fsp56" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:31.633371  115828 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:31.637239  115828 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 17:17:31.637264  115828 pod_ready.go:82] duration metric: took 3.883872ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:31.637275  115828 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:31.641429  115828 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 17:17:31.641454  115828 pod_ready.go:82] duration metric: took 4.170345ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:31.641466  115828 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:31.773459  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:32.123662  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:32.272647  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:32.552570  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:32.773250  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:33.052216  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:33.146872  115828 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 17:17:33.146897  115828 pod_ready.go:82] duration metric: took 1.505420484s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:33.146906  115828 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w82jh" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:33.227432  115828 pod_ready.go:93] pod "kube-proxy-w82jh" in "kube-system" namespace has status "Ready":"True"
	I0920 17:17:33.227457  115828 pod_ready.go:82] duration metric: took 80.544523ms for pod "kube-proxy-w82jh" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:33.227466  115828 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:33.272637  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:33.624419  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:33.626934  115828 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 17:17:33.626961  115828 pod_ready.go:82] duration metric: took 399.486971ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:33.626975  115828 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-mjmjj" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:33.772920  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:34.026388  115828 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-mjmjj" in "kube-system" namespace has status "Ready":"True"
	I0920 17:17:34.026418  115828 pod_ready.go:82] duration metric: took 399.430605ms for pod "nvidia-device-plugin-daemonset-mjmjj" in "kube-system" namespace to be "Ready" ...
	I0920 17:17:34.026429  115828 pod_ready.go:39] duration metric: took 10.412581256s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:17:34.026453  115828 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:17:34.026515  115828 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:17:34.045670  115828 api_server.go:72] duration metric: took 11.01897391s to wait for apiserver process to appear ...
	I0920 17:17:34.045701  115828 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:17:34.045723  115828 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:17:34.049721  115828 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:17:34.050706  115828 api_server.go:141] control plane version: v1.31.1
	I0920 17:17:34.050730  115828 api_server.go:131] duration metric: took 5.022221ms to wait for apiserver health ...
	I0920 17:17:34.050741  115828 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:17:34.125094  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:34.232952  115828 system_pods.go:59] 16 kube-system pods found
	I0920 17:17:34.232986  115828 system_pods.go:61] "coredns-7c65d6cfc9-fsp56" [8a3ebdf0-774d-4cf9-bdf7-3e87a3cd2257] Running
	I0920 17:17:34.232997  115828 system_pods.go:61] "csi-hostpath-attacher-0" [b9df6fe1-219c-42d4-9cad-b4b995614589] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:17:34.233005  115828 system_pods.go:61] "csi-hostpath-resizer-0" [d3b5fc97-e3a7-441c-8d11-41378988a91f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:17:34.233022  115828 system_pods.go:61] "csi-hostpathplugin-gb6g5" [51b84f1a-1680-443c-a9f5-d5fcd90f01a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:17:34.233028  115828 system_pods.go:61] "etcd-ubuntu-20-agent-2" [0f512d5a-f8b6-4df2-b122-f90c95b26833] Running
	I0920 17:17:34.233035  115828 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [7ad2f538-3136-4678-8724-1d076b64ced0] Running
	I0920 17:17:34.233043  115828 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [bf9450d6-e953-4ecd-81f1-3db318e95f36] Running
	I0920 17:17:34.233050  115828 system_pods.go:61] "kube-proxy-w82jh" [eb27dfb4-acfc-4559-8809-bd2d259bd6c4] Running
	I0920 17:17:34.233058  115828 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [0cb38b70-334b-490a-a274-bf06cc229b3e] Running
	I0920 17:17:34.233067  115828 system_pods.go:61] "metrics-server-84c5f94fbc-x8j4s" [61012467-29a4-43b5-91a0-1bdfaa8a6bb1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:17:34.233075  115828 system_pods.go:61] "nvidia-device-plugin-daemonset-mjmjj" [31b9e06e-6478-4e30-b020-c8d37a2c816c] Running
	I0920 17:17:34.233083  115828 system_pods.go:61] "registry-66c9cd494c-2j6bv" [853bfd79-8cfc-4715-b415-cd985cf6274d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 17:17:34.233091  115828 system_pods.go:61] "registry-proxy-z6h9c" [13b5f103-a32f-4f30-ad42-68d95f995da9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:17:34.233107  115828 system_pods.go:61] "snapshot-controller-56fcc65765-2mhf9" [d5609f93-1bb4-4132-b88c-8da35c0e6d74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:17:34.233121  115828 system_pods.go:61] "snapshot-controller-56fcc65765-zk4nf" [29f1c218-ae8f-46d2-8ca8-184055340930] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:17:34.233128  115828 system_pods.go:61] "storage-provisioner" [2fc1d507-8a9b-46cb-97b2-89f5178b1a8b] Running
	I0920 17:17:34.233138  115828 system_pods.go:74] duration metric: took 182.389632ms to wait for pod list to return data ...
	I0920 17:17:34.233152  115828 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:17:34.273550  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:34.427660  115828 default_sa.go:45] found service account: "default"
	I0920 17:17:34.427686  115828 default_sa.go:55] duration metric: took 194.527887ms for default service account to be created ...
	I0920 17:17:34.427696  115828 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:17:34.553923  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:34.658977  115828 system_pods.go:86] 16 kube-system pods found
	I0920 17:17:34.659006  115828 system_pods.go:89] "coredns-7c65d6cfc9-fsp56" [8a3ebdf0-774d-4cf9-bdf7-3e87a3cd2257] Running
	I0920 17:17:34.659017  115828 system_pods.go:89] "csi-hostpath-attacher-0" [b9df6fe1-219c-42d4-9cad-b4b995614589] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:17:34.659023  115828 system_pods.go:89] "csi-hostpath-resizer-0" [d3b5fc97-e3a7-441c-8d11-41378988a91f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:17:34.659045  115828 system_pods.go:89] "csi-hostpathplugin-gb6g5" [51b84f1a-1680-443c-a9f5-d5fcd90f01a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:17:34.659055  115828 system_pods.go:89] "etcd-ubuntu-20-agent-2" [0f512d5a-f8b6-4df2-b122-f90c95b26833] Running
	I0920 17:17:34.659062  115828 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [7ad2f538-3136-4678-8724-1d076b64ced0] Running
	I0920 17:17:34.659068  115828 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [bf9450d6-e953-4ecd-81f1-3db318e95f36] Running
	I0920 17:17:34.659076  115828 system_pods.go:89] "kube-proxy-w82jh" [eb27dfb4-acfc-4559-8809-bd2d259bd6c4] Running
	I0920 17:17:34.659081  115828 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [0cb38b70-334b-490a-a274-bf06cc229b3e] Running
	I0920 17:17:34.659086  115828 system_pods.go:89] "metrics-server-84c5f94fbc-x8j4s" [61012467-29a4-43b5-91a0-1bdfaa8a6bb1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:17:34.659089  115828 system_pods.go:89] "nvidia-device-plugin-daemonset-mjmjj" [31b9e06e-6478-4e30-b020-c8d37a2c816c] Running
	I0920 17:17:34.659095  115828 system_pods.go:89] "registry-66c9cd494c-2j6bv" [853bfd79-8cfc-4715-b415-cd985cf6274d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 17:17:34.659100  115828 system_pods.go:89] "registry-proxy-z6h9c" [13b5f103-a32f-4f30-ad42-68d95f995da9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:17:34.659108  115828 system_pods.go:89] "snapshot-controller-56fcc65765-2mhf9" [d5609f93-1bb4-4132-b88c-8da35c0e6d74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:17:34.659117  115828 system_pods.go:89] "snapshot-controller-56fcc65765-zk4nf" [29f1c218-ae8f-46d2-8ca8-184055340930] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:17:34.659121  115828 system_pods.go:89] "storage-provisioner" [2fc1d507-8a9b-46cb-97b2-89f5178b1a8b] Running
	I0920 17:17:34.659129  115828 system_pods.go:126] duration metric: took 231.427341ms to wait for k8s-apps to be running ...
	I0920 17:17:34.659138  115828 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:17:34.659189  115828 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:17:34.671219  115828 system_svc.go:56] duration metric: took 12.072538ms WaitForService to wait for kubelet
	I0920 17:17:34.671243  115828 kubeadm.go:582] duration metric: took 11.644559352s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:17:34.671262  115828 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:17:34.773272  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:34.827079  115828 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0920 17:17:34.827106  115828 node_conditions.go:123] node cpu capacity is 8
	I0920 17:17:34.827116  115828 node_conditions.go:105] duration metric: took 155.850265ms to run NodePressure ...
	I0920 17:17:34.827128  115828 start.go:241] waiting for startup goroutines ...
	I0920 17:17:35.053153  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:35.272396  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:35.553739  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:35.773322  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:36.052919  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:36.272753  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:36.553317  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:36.773979  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:37.052869  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:37.274274  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:37.552827  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:37.773744  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:38.124393  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:38.273506  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:38.553478  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:38.773765  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:39.053309  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:39.272814  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:39.552961  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:39.773203  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:40.053414  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:40.273463  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:40.553593  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:17:40.773147  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:41.053516  115828 kapi.go:107] duration metric: took 17.004188232s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 17:17:41.273497  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:41.773287  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:42.292300  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:42.773224  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:43.273832  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:43.772564  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:44.272771  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:44.773408  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:45.273071  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:45.774003  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:46.274273  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:46.772915  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:47.273372  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:47.773386  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:48.274336  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:48.772829  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:49.273853  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:49.772788  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:50.273271  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:50.773841  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:51.273768  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:51.773147  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:52.273205  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:52.772190  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:53.273700  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:53.772569  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:54.272448  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:54.772980  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:55.274112  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:55.773982  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:56.274106  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:56.773329  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:57.273322  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:57.773303  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:58.274112  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:58.772988  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:59.274392  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:17:59.773393  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:18:00.277151  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:18:00.772186  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:18:01.272790  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:18:01.774075  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:18:02.272514  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:18:02.773132  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:18:03.278084  115828 kapi.go:107] duration metric: took 37.009508751s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 17:18:12.523439  115828 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:18:12.523468  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:13.023601  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:13.522783  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:14.022643  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:14.522634  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:15.022471  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:15.522676  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:16.022915  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:16.522888  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:17.023251  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:17.523660  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:18.022954  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:18.523100  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:19.023011  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:19.523122  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:20.023517  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:20.523041  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:21.023692  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:21.523211  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:22.023190  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:22.523346  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:23.023234  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:23.522176  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:24.023892  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:24.522863  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:25.023163  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:25.522761  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:26.022945  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:26.522956  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:27.023233  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:27.523528  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:28.022996  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:28.522924  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:29.022982  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:29.523078  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:30.023423  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:30.522627  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:31.022597  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:31.522403  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:32.022604  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:32.522352  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:33.022275  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:33.522152  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:34.023842  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:34.522578  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:35.022879  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:35.523951  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:36.023235  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:36.523302  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:37.023780  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:37.523021  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:38.022983  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:38.523549  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:39.022434  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:39.522217  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:40.023907  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:40.523223  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:41.023480  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:41.522359  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:42.022829  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:42.522845  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:43.022704  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:43.522781  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:44.023181  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:44.523616  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:45.023367  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:45.522695  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:46.023057  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:46.522942  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:47.022855  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:47.523036  115828 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:18:48.022971  115828 kapi.go:107] duration metric: took 1m17.003525781s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 17:18:48.024710  115828 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0920 17:18:48.025976  115828 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 17:18:48.027230  115828 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 17:18:48.028605  115828 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, yakd, metrics-server, storage-provisioner-rancher, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0920 17:18:48.029776  115828 addons.go:510] duration metric: took 1m25.006954057s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner yakd metrics-server storage-provisioner-rancher inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0920 17:18:48.029816  115828 start.go:246] waiting for cluster config update ...
	I0920 17:18:48.029833  115828 start.go:255] writing updated cluster config ...
	I0920 17:18:48.030078  115828 exec_runner.go:51] Run: rm -f paused
	I0920 17:18:48.075988  115828 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:18:48.077732  115828 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2024-08-09 19:32:18 UTC, end at Fri 2024-09-20 17:28:39 UTC. --
	Sep 20 17:21:01 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:21:01.629235320Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 20 17:21:01 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:21:01.629234710Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 20 17:21:01 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:21:01.631309276Z" level=error msg="Error running exec 31d676f8bbd7e6ad6170388d900778a32e19a853e6aa40f76399e721fc9d13f3 in container: OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" spanID=e038923bc1b35273 traceID=7381c16ad4570f438a995603075a5d12
	Sep 20 17:21:01 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:21:01.825318827Z" level=info msg="ignoring event" container=e19df3d3b181791ccc66289a65d96f50c8aebb312e6ff5c19f9937edffefb7b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:22:24 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:22:24.393832678Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=06ab5f12a606d1c5 traceID=d16bcdbf8e1f9f1de1062a3175c91b2b
	Sep 20 17:22:24 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:22:24.396332357Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=06ab5f12a606d1c5 traceID=d16bcdbf8e1f9f1de1062a3175c91b2b
	Sep 20 17:23:52 ubuntu-20-agent-2 cri-dockerd[116373]: time="2024-09-20T17:23:52Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 20 17:23:53 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:23:53.629475941Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 20 17:23:53 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:23:53.629490805Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 20 17:23:53 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:23:53.631427740Z" level=error msg="Error running exec 2a84ec6e3008715f7c392f9236d017bc42de25d74aed9c3088c1823f74093750 in container: OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" spanID=6828e535151afcba traceID=1bf45860fbba892a88fd35826650942a
	Sep 20 17:23:53 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:23:53.845385405Z" level=info msg="ignoring event" container=1a158fd8ba402da2af78436db3541439aaac7a431a759fe48c9b0bb907ac032c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:25:14 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:25:14.398410640Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=e777d2d8ea7c7625 traceID=a2851d327f8e652cae91ef7e8716ce8e
	Sep 20 17:25:14 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:25:14.400742605Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=e777d2d8ea7c7625 traceID=a2851d327f8e652cae91ef7e8716ce8e
	Sep 20 17:27:38 ubuntu-20-agent-2 cri-dockerd[116373]: time="2024-09-20T17:27:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e1c09a00f0dee58024c03919516a73701badbeceed9af01380e347f310cfc284/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 20 17:27:39 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:27:39.069503142Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=b950e9aa33008838 traceID=7d05a2f7228f47e48c6eb5cc73040d5e
	Sep 20 17:27:39 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:27:39.071970825Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=b950e9aa33008838 traceID=7d05a2f7228f47e48c6eb5cc73040d5e
	Sep 20 17:27:50 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:27:50.395470040Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=dc0526d91dca8dc1 traceID=62b1d3415526d809115a0ff23b530c25
	Sep 20 17:27:50 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:27:50.398574337Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=dc0526d91dca8dc1 traceID=62b1d3415526d809115a0ff23b530c25
	Sep 20 17:28:17 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:28:17.401390133Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=6f8e50d3fce89038 traceID=b25cf3189973d1732112143b8e5121d9
	Sep 20 17:28:17 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:28:17.403646819Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=6f8e50d3fce89038 traceID=b25cf3189973d1732112143b8e5121d9
	Sep 20 17:28:38 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:28:38.551426968Z" level=info msg="ignoring event" container=e1c09a00f0dee58024c03919516a73701badbeceed9af01380e347f310cfc284 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:28:38 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:28:38.808673241Z" level=info msg="ignoring event" container=1019076b967003a4c3a96afad4bad43fe4f343623ea1499c46d8125c65d802dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:28:38 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:28:38.865253353Z" level=info msg="ignoring event" container=127ca19af4396e78a79b63397920d1f25ed377d456d3d67a5cbabe1e9fa40183 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:28:38 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:28:38.949769453Z" level=info msg="ignoring event" container=0b7092ae9fafa713d8e535967248b84303bf5d595ba8878ffec3f49569307506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:28:39 ubuntu-20-agent-2 dockerd[116044]: time="2024-09-20T17:28:39.019304791Z" level=info msg="ignoring event" container=a95279bf41b05ffba271f068b050df78e647c41a02e73be3ac45f768a9c4d860 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	1a158fd8ba402       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   95c9bc363e629       gadget-jrcm5
	493e4f90669e7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   dcbff8f55448c       gcp-auth-89d5ffd79-n69xm
	30108f3f0843e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   44afa40650359       csi-hostpathplugin-gb6g5
	0c213b228d04c       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   44afa40650359       csi-hostpathplugin-gb6g5
	7e3818edfa8de       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   44afa40650359       csi-hostpathplugin-gb6g5
	fa9002ea9e232       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   44afa40650359       csi-hostpathplugin-gb6g5
	febf09874b2b5       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   44afa40650359       csi-hostpathplugin-gb6g5
	511028e7b30ca       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   44afa40650359       csi-hostpathplugin-gb6g5
	c1f7268f8602c       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   140a114589979       csi-hostpath-resizer-0
	85fd99d809aa2       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   6f30aa1ee21fe       csi-hostpath-attacher-0
	22d74d88db251       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   73b9e527f0e41       snapshot-controller-56fcc65765-2mhf9
	a466937fd3cb3       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   7ba7722dc6276       snapshot-controller-56fcc65765-zk4nf
	606dda0f537eb       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   525e2a0427161       local-path-provisioner-86d989889c-w9wbb
	8d53b7c969d22       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   fc908a2118de2       yakd-dashboard-67d98fc6b-ttnzk
	00209ba1148d6       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        10 minutes ago      Running             metrics-server                           0                   c40e27385214b       metrics-server-84c5f94fbc-x8j4s
	127ca19af4396       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              11 minutes ago      Exited              registry-proxy                           0                   a95279bf41b05       registry-proxy-z6h9c
	1019076b96700       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Exited              registry                                 0                   0b7092ae9fafa       registry-66c9cd494c-2j6bv
	2a30e243e7572       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               11 minutes ago      Running             cloud-spanner-emulator                   0                   723a1d7bd407a       cloud-spanner-emulator-5b584cc74-4w5v6
	6b87f5f5a8b91       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   3aa108170f44f       nvidia-device-plugin-daemonset-mjmjj
	d1c07849f8de9       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   d5e8b31ddc3ec       storage-provisioner
	3e2bf08ccdf9b       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   a07e6cc30b56d       coredns-7c65d6cfc9-fsp56
	2135bf4b94340       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   5ed3b4ecabf5b       kube-proxy-w82jh
	c14f9ee625a49       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   52a024fa29377       kube-apiserver-ubuntu-20-agent-2
	3f4eb63e5a85c       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   c9e6a326f9eff       kube-scheduler-ubuntu-20-agent-2
	5c1bfa3f5a02c       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   b218f5eb1c3c4       etcd-ubuntu-20-agent-2
	96acd7f5e6309       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   b2f22c67c1f43       kube-controller-manager-ubuntu-20-agent-2
	
	
	==> coredns [3e2bf08ccdf9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:50508 - 50902 "HINFO IN 1925215237878309941.7881556200531506840. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01776491s
	[INFO] 10.244.0.24:43657 - 62613 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000340078s
	[INFO] 10.244.0.24:32863 - 49864 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000501654s
	[INFO] 10.244.0.24:58711 - 57143 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129227s
	[INFO] 10.244.0.24:44590 - 43495 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124345s
	[INFO] 10.244.0.24:37675 - 61441 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000130344s
	[INFO] 10.244.0.24:35502 - 10320 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000151209s
	[INFO] 10.244.0.24:47845 - 47449 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003980558s
	[INFO] 10.244.0.24:37168 - 55510 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004591715s
	[INFO] 10.244.0.24:41887 - 1748 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003425385s
	[INFO] 10.244.0.24:43399 - 56372 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003611724s
	[INFO] 10.244.0.24:47102 - 20752 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003231009s
	[INFO] 10.244.0.24:57209 - 7536 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003328009s
	[INFO] 10.244.0.24:42707 - 40382 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001272769s
	[INFO] 10.244.0.24:59991 - 32919 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.002668377s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=35d0eeb96573bd708dfd5c070da844e6f0fad78a
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_17_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:17:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 17:28:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:24:26 +0000   Fri, 20 Sep 2024 17:17:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:24:26 +0000   Fri, 20 Sep 2024 17:17:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:24:26 +0000   Fri, 20 Sep 2024 17:17:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:24:26 +0000   Fri, 20 Sep 2024 17:17:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    0fd695e7-50c5-4838-9acc-b2d1cdaf04a4
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-5b584cc74-4w5v6       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-jrcm5                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-n69xm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-fsp56                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-gb6g5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-w82jh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-x8j4s              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-mjmjj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-2mhf9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-zk4nf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-w9wbb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-ttnzk               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 7a e1 80 ce b5 0b 08 06
	[  +0.021646] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 42 ce 6e 2e b5 c4 08 06
	[  +2.646956] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff aa 9e 77 63 09 10 08 06
	[  +1.700255] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 07 f7 33 c4 67 08 06
	[  +2.160430] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 4a 9f 0f a8 1f 31 08 06
	[  +4.069180] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 06 80 c8 06 51 08 06
	[  +0.671747] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 5a 61 a0 65 c5 08 06
	[  +0.154124] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a b5 e7 70 b7 b2 08 06
	[  +0.094077] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e 4c 79 79 e2 83 08 06
	[Sep20 17:18] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 15 91 01 14 a4 08 06
	[  +0.036797] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 56 f3 39 08 02 08 06
	[ +12.078577] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d2 98 e0 55 8a 80 08 06
	[  +0.000508] IPv4: martian source 10.244.0.24 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 4a 5b 33 c2 ab 07 08 06
	
	
	==> etcd [5c1bfa3f5a02] <==
	{"level":"info","ts":"2024-09-20T17:17:14.095232Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-20T17:17:14.683397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T17:17:14.683446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T17:17:14.683461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
	{"level":"info","ts":"2024-09-20T17:17:14.683473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T17:17:14.683478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-20T17:17:14.683486Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-20T17:17:14.683494Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-20T17:17:14.684402Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:17:14.685134Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:17:14.685130Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T17:17:14.685165Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:17:14.685397Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:17:14.685501Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:17:14.685532Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:17:14.685545Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T17:17:14.685613Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T17:17:14.686289Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:17:14.686310Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:17:14.687121Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-20T17:17:14.687451Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-20T17:17:42.289630Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.772516ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8952190758603319724 > lease_revoke:<id:7c3c92107015e847>","response":"size:27"}
	{"level":"info","ts":"2024-09-20T17:27:14.705292Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1682}
	{"level":"info","ts":"2024-09-20T17:27:14.728871Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1682,"took":"23.091082ms","hash":337044555,"current-db-size-bytes":8007680,"current-db-size":"8.0 MB","current-db-size-in-use-bytes":4329472,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-09-20T17:27:14.728916Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":337044555,"revision":1682,"compact-revision":-1}
	
	
	==> gcp-auth [493e4f90669e] <==
	2024/09/20 17:18:47 GCP Auth Webhook started!
	2024/09/20 17:19:03 Ready to marshal response ...
	2024/09/20 17:19:03 Ready to write response ...
	2024/09/20 17:19:03 Ready to marshal response ...
	2024/09/20 17:19:03 Ready to write response ...
	2024/09/20 17:19:26 Ready to marshal response ...
	2024/09/20 17:19:26 Ready to write response ...
	2024/09/20 17:19:26 Ready to marshal response ...
	2024/09/20 17:19:26 Ready to write response ...
	2024/09/20 17:19:26 Ready to marshal response ...
	2024/09/20 17:19:26 Ready to write response ...
	2024/09/20 17:27:38 Ready to marshal response ...
	2024/09/20 17:27:38 Ready to write response ...
	
	
	==> kernel <==
	 17:28:39 up  1:11,  0 users,  load average: 0.10, 0.35, 0.63
	Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [c14f9ee625a4] <==
	W0920 17:18:05.197084       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.38.66:443: connect: connection refused
	W0920 17:18:12.014658       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.77.210:443: connect: connection refused
	E0920 17:18:12.014701       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.77.210:443: connect: connection refused" logger="UnhandledError"
	W0920 17:18:34.041182       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.77.210:443: connect: connection refused
	E0920 17:18:34.041219       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.77.210:443: connect: connection refused" logger="UnhandledError"
	W0920 17:18:34.062220       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.77.210:443: connect: connection refused
	E0920 17:18:34.062261       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.77.210:443: connect: connection refused" logger="UnhandledError"
	I0920 17:19:03.317688       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0920 17:19:03.336389       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0920 17:19:16.707979       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0920 17:19:16.718433       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0920 17:19:16.832778       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 17:19:16.836748       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 17:19:16.850226       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0920 17:19:16.930936       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 17:19:17.020235       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 17:19:17.065098       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 17:19:17.095499       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0920 17:19:17.845589       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0920 17:19:17.905083       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 17:19:17.931887       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 17:19:17.961266       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0920 17:19:17.980484       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 17:19:18.095966       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 17:19:18.249272       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [96acd7f5e630] <==
	W0920 17:27:31.078701       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:27:31.078742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:27:35.511599       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:27:35.511648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:27:44.454280       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:27:44.454327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:27:47.514107       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:27:47.514158       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:27:55.815816       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:27:55.815861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:28:01.783757       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:28:01.783802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:28:02.211520       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:28:02.211564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:28:08.336785       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:28:08.336830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:28:11.654895       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:28:11.654941       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:28:31.169123       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:28:31.169165       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 17:28:34.207622       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:28:34.207665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 17:28:38.763077       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="10.173µs"
	W0920 17:28:39.507373       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:28:39.507414       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [2135bf4b9434] <==
	I0920 17:17:24.611385       1 server_linux.go:66] "Using iptables proxy"
	I0920 17:17:24.778641       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0920 17:17:24.778806       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:17:24.894445       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 17:17:24.894518       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:17:24.920345       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:17:24.921017       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:17:24.921057       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:17:24.930031       1 config.go:199] "Starting service config controller"
	I0920 17:17:24.930064       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:17:24.930107       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:17:24.930113       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:17:24.930987       1 config.go:328] "Starting node config controller"
	I0920 17:17:24.931000       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:17:25.031293       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:17:25.031362       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:17:25.031669       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3f4eb63e5a85] <==
	W0920 17:17:15.607400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0920 17:17:15.607227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 17:17:15.607423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0920 17:17:15.607423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0920 17:17:15.607289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0920 17:17:15.607398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:17:15.607308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:17:15.607490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:17:16.418666       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:17:16.418707       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 17:17:16.454108       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:17:16.454146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:17:16.557028       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:17:16.557078       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:17:16.654040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:17:16.654086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:17:16.679453       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:17:16.679500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:17:16.739096       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:17:16.739149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:17:16.788494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 17:17:16.788542       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:17:16.830250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 17:17:16.830296       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 17:17:19.604046       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2024-08-09 19:32:18 UTC, end at Fri 2024-09-20 17:28:39 UTC. --
	Sep 20 17:28:04 ubuntu-20-agent-2 kubelet[117266]: E0920 17:28:04.247759  117266 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="df359680-544e-4c84-ab20-baba733486a6"
	Sep 20 17:28:07 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:07.245354  117266 scope.go:117] "RemoveContainer" containerID="1a158fd8ba402da2af78436db3541439aaac7a431a759fe48c9b0bb907ac032c"
	Sep 20 17:28:07 ubuntu-20-agent-2 kubelet[117266]: E0920 17:28:07.245531  117266 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-jrcm5_gadget(fe89d6f4-13fa-477f-8f04-00c0387811bb)\"" pod="gadget/gadget-jrcm5" podUID="fe89d6f4-13fa-477f-8f04-00c0387811bb"
	Sep 20 17:28:14 ubuntu-20-agent-2 kubelet[117266]: E0920 17:28:14.247353  117266 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="fd3e517b-00be-4510-87e8-46488dc0ded9"
	Sep 20 17:28:17 ubuntu-20-agent-2 kubelet[117266]: E0920 17:28:17.404156  117266 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:latest"
	Sep 20 17:28:17 ubuntu-20-agent-2 kubelet[117266]: E0920 17:28:17.404346  117266 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-test,Image:gcr.io/k8s-minikube/busybox,Command:[],Args:[sh -c wget --spider -S http://registry.kube-system.svc.cluster.local],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hfz5k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-test_default(df359680-544e-4c84-ab20-baba733486a6): ErrImagePull: Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" logger="UnhandledError"
	Sep 20 17:28:17 ubuntu-20-agent-2 kubelet[117266]: E0920 17:28:17.405534  117266 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="df359680-544e-4c84-ab20-baba733486a6"
	Sep 20 17:28:19 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:19.246028  117266 scope.go:117] "RemoveContainer" containerID="1a158fd8ba402da2af78436db3541439aaac7a431a759fe48c9b0bb907ac032c"
	Sep 20 17:28:19 ubuntu-20-agent-2 kubelet[117266]: E0920 17:28:19.246217  117266 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-jrcm5_gadget(fe89d6f4-13fa-477f-8f04-00c0387811bb)\"" pod="gadget/gadget-jrcm5" podUID="fe89d6f4-13fa-477f-8f04-00c0387811bb"
	Sep 20 17:28:29 ubuntu-20-agent-2 kubelet[117266]: E0920 17:28:29.247982  117266 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="fd3e517b-00be-4510-87e8-46488dc0ded9"
	Sep 20 17:28:29 ubuntu-20-agent-2 kubelet[117266]: E0920 17:28:29.248086  117266 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="df359680-544e-4c84-ab20-baba733486a6"
	Sep 20 17:28:31 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:31.245929  117266 scope.go:117] "RemoveContainer" containerID="1a158fd8ba402da2af78436db3541439aaac7a431a759fe48c9b0bb907ac032c"
	Sep 20 17:28:31 ubuntu-20-agent-2 kubelet[117266]: E0920 17:28:31.246120  117266 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-jrcm5_gadget(fe89d6f4-13fa-477f-8f04-00c0387811bb)\"" pod="gadget/gadget-jrcm5" podUID="fe89d6f4-13fa-477f-8f04-00c0387811bb"
	Sep 20 17:28:38 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:38.689831  117266 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfz5k\" (UniqueName: \"kubernetes.io/projected/df359680-544e-4c84-ab20-baba733486a6-kube-api-access-hfz5k\") pod \"df359680-544e-4c84-ab20-baba733486a6\" (UID: \"df359680-544e-4c84-ab20-baba733486a6\") "
	Sep 20 17:28:38 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:38.689886  117266 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/df359680-544e-4c84-ab20-baba733486a6-gcp-creds\") pod \"df359680-544e-4c84-ab20-baba733486a6\" (UID: \"df359680-544e-4c84-ab20-baba733486a6\") "
	Sep 20 17:28:38 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:38.689951  117266 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df359680-544e-4c84-ab20-baba733486a6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "df359680-544e-4c84-ab20-baba733486a6" (UID: "df359680-544e-4c84-ab20-baba733486a6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 17:28:38 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:38.691694  117266 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df359680-544e-4c84-ab20-baba733486a6-kube-api-access-hfz5k" (OuterVolumeSpecName: "kube-api-access-hfz5k") pod "df359680-544e-4c84-ab20-baba733486a6" (UID: "df359680-544e-4c84-ab20-baba733486a6"). InnerVolumeSpecName "kube-api-access-hfz5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 17:28:38 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:38.791306  117266 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/df359680-544e-4c84-ab20-baba733486a6-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 17:28:38 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:38.791342  117266 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hfz5k\" (UniqueName: \"kubernetes.io/projected/df359680-544e-4c84-ab20-baba733486a6-kube-api-access-hfz5k\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 17:28:39 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:39.093028  117266 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwr8s\" (UniqueName: \"kubernetes.io/projected/853bfd79-8cfc-4715-b415-cd985cf6274d-kube-api-access-lwr8s\") pod \"853bfd79-8cfc-4715-b415-cd985cf6274d\" (UID: \"853bfd79-8cfc-4715-b415-cd985cf6274d\") "
	Sep 20 17:28:39 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:39.099427  117266 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853bfd79-8cfc-4715-b415-cd985cf6274d-kube-api-access-lwr8s" (OuterVolumeSpecName: "kube-api-access-lwr8s") pod "853bfd79-8cfc-4715-b415-cd985cf6274d" (UID: "853bfd79-8cfc-4715-b415-cd985cf6274d"). InnerVolumeSpecName "kube-api-access-lwr8s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 17:28:39 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:39.193740  117266 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4r4w\" (UniqueName: \"kubernetes.io/projected/13b5f103-a32f-4f30-ad42-68d95f995da9-kube-api-access-h4r4w\") pod \"13b5f103-a32f-4f30-ad42-68d95f995da9\" (UID: \"13b5f103-a32f-4f30-ad42-68d95f995da9\") "
	Sep 20 17:28:39 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:39.193839  117266 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lwr8s\" (UniqueName: \"kubernetes.io/projected/853bfd79-8cfc-4715-b415-cd985cf6274d-kube-api-access-lwr8s\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 17:28:39 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:39.195797  117266 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b5f103-a32f-4f30-ad42-68d95f995da9-kube-api-access-h4r4w" (OuterVolumeSpecName: "kube-api-access-h4r4w") pod "13b5f103-a32f-4f30-ad42-68d95f995da9" (UID: "13b5f103-a32f-4f30-ad42-68d95f995da9"). InnerVolumeSpecName "kube-api-access-h4r4w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 17:28:39 ubuntu-20-agent-2 kubelet[117266]: I0920 17:28:39.294543  117266 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h4r4w\" (UniqueName: \"kubernetes.io/projected/13b5f103-a32f-4f30-ad42-68d95f995da9-kube-api-access-h4r4w\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	
	
	==> storage-provisioner [d1c07849f8de] <==
	I0920 17:17:25.402728       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:17:25.414202       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:17:25.414243       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:17:25.424776       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:17:25.425004       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_77f5c219-8c99-4525-81e2-76afb874482d!
	I0920 17:17:25.426632       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"382a32cf-58e3-44d0-94e4-d5bf8fad95f9", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_77f5c219-8c99-4525-81e2-76afb874482d became leader
	I0920 17:17:25.525381       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_77f5c219-8c99-4525-81e2-76afb874482d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Fri, 20 Sep 2024 17:19:26 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t4dhs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-t4dhs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m41s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m41s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m41s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m27s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m1s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.79s)


Test pass (110/167)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.28
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 0.97
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.54
22 TestOffline 40.64
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 102.6
29 TestAddons/serial/Volcano 38.4
31 TestAddons/serial/GCPAuth/Namespaces 0.11
35 TestAddons/parallel/InspektorGadget 10.45
36 TestAddons/parallel/MetricsServer 5.37
38 TestAddons/parallel/CSI 47.57
39 TestAddons/parallel/Headlamp 16.99
40 TestAddons/parallel/CloudSpanner 5.26
42 TestAddons/parallel/NvidiaDevicePlugin 5.23
43 TestAddons/parallel/Yakd 10.41
44 TestAddons/StoppedEnableDisable 10.7
46 TestCertExpiration 227.79
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 25.38
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 30.13
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.06
64 TestFunctional/serial/MinikubeKubectlCmd 0.11
65 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
66 TestFunctional/serial/ExtraConfig 37.7
67 TestFunctional/serial/ComponentHealth 0.06
68 TestFunctional/serial/LogsCmd 0.79
69 TestFunctional/serial/LogsFileCmd 0.82
70 TestFunctional/serial/InvalidService 4.12
72 TestFunctional/parallel/ConfigCmd 0.26
73 TestFunctional/parallel/DashboardCmd 6.44
74 TestFunctional/parallel/DryRun 0.16
75 TestFunctional/parallel/InternationalLanguage 0.08
76 TestFunctional/parallel/StatusCmd 0.41
79 TestFunctional/parallel/ProfileCmd/profile_not_create 0.21
80 TestFunctional/parallel/ProfileCmd/profile_list 0.2
81 TestFunctional/parallel/ProfileCmd/profile_json_output 0.19
83 TestFunctional/parallel/ServiceCmd/DeployApp 10.14
84 TestFunctional/parallel/ServiceCmd/List 0.33
85 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
86 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
87 TestFunctional/parallel/ServiceCmd/Format 0.14
88 TestFunctional/parallel/ServiceCmd/URL 0.15
89 TestFunctional/parallel/ServiceCmdConnect 7.3
90 TestFunctional/parallel/AddonsCmd 0.11
91 TestFunctional/parallel/PersistentVolumeClaim 22.04
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.26
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
97 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.18
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
103 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
106 TestFunctional/parallel/MySQL 20.41
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 14.37
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 14.47
115 TestFunctional/parallel/NodeLabels 0.06
119 TestFunctional/parallel/Version/short 0.04
120 TestFunctional/parallel/Version/components 0.23
121 TestFunctional/parallel/License 0.21
122 TestFunctional/delete_echo-server_images 0.03
123 TestFunctional/delete_my-image_image 0.01
124 TestFunctional/delete_minikube_cached_images 0.01
129 TestImageBuild/serial/Setup 13.23
130 TestImageBuild/serial/NormalBuild 1.55
131 TestImageBuild/serial/BuildWithBuildArg 0.77
132 TestImageBuild/serial/BuildWithDockerIgnore 0.56
133 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.55
137 TestJSONOutput/start/Command 28.57
138 TestJSONOutput/start/Audit 0
140 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
141 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
143 TestJSONOutput/pause/Command 0.46
144 TestJSONOutput/pause/Audit 0
146 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/unpause/Command 0.4
150 TestJSONOutput/unpause/Audit 0
152 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/stop/Command 10.42
156 TestJSONOutput/stop/Audit 0
158 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
160 TestErrorJSONOutput 0.19
165 TestMainNoArgs 0.05
166 TestMinikubeProfile 34.11
174 TestPause/serial/Start 24.97
175 TestPause/serial/SecondStartNoReconfiguration 30.07
176 TestPause/serial/Pause 0.5
177 TestPause/serial/VerifyStatus 0.13
178 TestPause/serial/Unpause 0.41
179 TestPause/serial/PauseAgain 0.54
180 TestPause/serial/DeletePaused 1.95
181 TestPause/serial/VerifyDeletedResources 0.06
195 TestRunningBinaryUpgrade 69.29
197 TestStoppedBinaryUpgrade/Setup 0.56
198 TestStoppedBinaryUpgrade/Upgrade 50.32
199 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
200 TestKubernetesUpgrade 314.71
TestDownloadOnly/v1.20.0/json-events (1.28s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.281200579s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.28s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (57.355758ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:16:21
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:16:21.158978  111970 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:16:21.159212  111970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:16:21.159221  111970 out.go:358] Setting ErrFile to fd 2...
	I0920 17:16:21.159225  111970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:16:21.159395  111970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-105157/.minikube/bin
	W0920 17:16:21.159530  111970 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19679-105157/.minikube/config/config.json: open /home/jenkins/minikube-integration/19679-105157/.minikube/config/config.json: no such file or directory
	I0920 17:16:21.160085  111970 out.go:352] Setting JSON to true
	I0920 17:16:21.160980  111970 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3533,"bootTime":1726849048,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:16:21.161096  111970 start.go:139] virtualization: kvm guest
	I0920 17:16:21.163299  111970 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 17:16:21.163416  111970 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19679-105157/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:16:21.163459  111970 notify.go:220] Checking for updates...
	I0920 17:16:21.164812  111970 out.go:169] MINIKUBE_LOCATION=19679
	I0920 17:16:21.166315  111970 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:16:21.167896  111970 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19679-105157/kubeconfig
	I0920 17:16:21.169272  111970 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-105157/.minikube
	I0920 17:16:21.170777  111970 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (0.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (0.97s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (56.091938ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC | 20 Sep 24 17:16 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC | 20 Sep 24 17:16 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 20 Sep 24 17:16 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:16:22
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:16:22.732812  112122 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:16:22.733069  112122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:16:22.733079  112122 out.go:358] Setting ErrFile to fd 2...
	I0920 17:16:22.733084  112122 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:16:22.733265  112122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-105157/.minikube/bin
	I0920 17:16:22.733789  112122 out.go:352] Setting JSON to true
	I0920 17:16:22.734631  112122 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":3535,"bootTime":1726849048,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:16:22.734727  112122 start.go:139] virtualization: kvm guest
	I0920 17:16:22.736837  112122 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 17:16:22.736966  112122 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19679-105157/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:16:22.737051  112122 notify.go:220] Checking for updates...
	I0920 17:16:22.738412  112122 out.go:169] MINIKUBE_LOCATION=19679
	I0920 17:16:22.739740  112122 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:16:22.741100  112122 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19679-105157/kubeconfig
	I0920 17:16:22.742294  112122 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-105157/.minikube
	I0920 17:16:22.743474  112122 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
I0920 17:16:24.203427  111958 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:44693 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.54s)

                                                
                                    
TestOffline (40.64s)

                                                
                                                
=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (39.037268706s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.601168358s)
--- PASS: TestOffline (40.64s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (47.136622ms)

                                                
                                                
-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (48.228786ms)

                                                
                                                
-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (102.6s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (1m42.604207198s)
--- PASS: TestAddons/Setup (102.60s)

                                                
                                    
TestAddons/serial/Volcano (38.4s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 9.171719ms
addons_test.go:843: volcano-admission stabilized in 9.699622ms
addons_test.go:835: volcano-scheduler stabilized in 9.745435ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-9rz42" [f5a09e45-d46b-42d2-ada8-dac72c52bfe9] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003611793s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-6g7qh" [12c4d9e4-ae0f-46e0-bd58-9584a13dfdff] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003526747s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-dc4kg" [51fdc9f6-305f-428b-8fcd-5c14f00f2acb] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00285236s
addons_test.go:870: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ca965359-7555-4d4a-a2a7-a435d9aabeb5] Pending
helpers_test.go:344: "test-job-nginx-0" [ca965359-7555-4d4a-a2a7-a435d9aabeb5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ca965359-7555-4d4a-a2a7-a435d9aabeb5] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.00333043s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.098086537s)
--- PASS: TestAddons/serial/Volcano (38.40s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.45s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jrcm5" [fe89d6f4-13fa-477f-8f04-00c0387811bb] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003910108s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.443377188s)
--- PASS: TestAddons/parallel/InspektorGadget (10.45s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.37s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.053733ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-x8j4s" [61012467-29a4-43b5-91a0-1bdfaa8a6bb1] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004170668s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.37s)

                                                
                                    
TestAddons/parallel/CSI (47.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
I0920 17:28:55.891085  111958 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 17:28:55.895236  111958 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 17:28:55.895259  111958 kapi.go:107] duration metric: took 4.197314ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.206739ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [aecf851d-6fd3-41ec-985c-c027c196e2a9] Pending
helpers_test.go:344: "task-pv-pod" [aecf851d-6fd3-41ec-985c-c027c196e2a9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [aecf851d-6fd3-41ec-985c-c027c196e2a9] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003847677s
addons_test.go:528: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [86f9f2a2-612e-46c8-9e00-6945ad1ca20a] Pending
helpers_test.go:344: "task-pv-pod-restore" [86f9f2a2-612e-46c8-9e00-6945ad1ca20a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [86f9f2a2-612e-46c8-9e00-6945ad1ca20a] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003457009s
addons_test.go:570: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.273398905s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (47.57s)

                                                
                                    
TestAddons/parallel/Headlamp (16.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-sr9gp" [62ce804f-0cf3-4c3c-862c-19ced586ae34] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-sr9gp" [62ce804f-0cf3-4c3c-862c-19ced586ae34] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-sr9gp" [62ce804f-0cf3-4c3c-862c-19ced586ae34] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003488361s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.48818543s)
--- PASS: TestAddons/parallel/Headlamp (16.99s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-4w5v6" [9bf64223-a530-4ef4-bb6c-c96f821e0d1a] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002982621s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.23s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-mjmjj" [31b9e06e-6478-4e30-b020-c8d37a2c816c] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003992025s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.23s)

                                                
                                    
TestAddons/parallel/Yakd (10.41s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-ttnzk" [0c3b19fd-50c5-48f3-9734-9ff44b09adb1] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003759249s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.409910719s)
--- PASS: TestAddons/parallel/Yakd (10.41s)

                                                
                                    
TestAddons/StoppedEnableDisable (10.7s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.379989381s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.70s)

                                                
                                    
TestCertExpiration (227.79s)

                                                
                                                
=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (13.343350202s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (32.749485131s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.691346567s)
--- PASS: TestCertExpiration (227.79s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19679-105157/.minikube/files/etc/test/nested/copy/111958/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (25.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (25.376150936s)
--- PASS: TestFunctional/serial/StartWithProxy (25.38s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (30.13s)

=== RUN   TestFunctional/serial/SoftStart
I0920 17:34:46.374910  111958 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (30.127376293s)
functional_test.go:663: soft start took 30.128038128s for "minikube" cluster.
I0920 17:35:16.502649  111958 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (30.13s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (37.7s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.703486935s)
functional_test.go:761: restart took 37.703597533s for "minikube" cluster.
I0920 17:35:54.526234  111958 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (37.70s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.79s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.79s)

TestFunctional/serial/LogsFileCmd (0.82s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd1878832626/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.82s)

TestFunctional/serial/InvalidService (4.12s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (154.438082ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:31403 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.12s)

TestFunctional/parallel/ConfigCmd (0.26s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (41.615616ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (41.833656ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.26s)

TestFunctional/parallel/DashboardCmd (6.44s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/20 17:36:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 146925: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.44s)

TestFunctional/parallel/DryRun (0.16s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (79.271134ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-105157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-105157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0920 17:36:07.054335  147299 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:36:07.054611  147299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:36:07.054620  147299 out.go:358] Setting ErrFile to fd 2...
	I0920 17:36:07.054625  147299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:36:07.054807  147299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-105157/.minikube/bin
	I0920 17:36:07.055340  147299 out.go:352] Setting JSON to false
	I0920 17:36:07.056355  147299 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4719,"bootTime":1726849048,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:36:07.056454  147299 start.go:139] virtualization: kvm guest
	I0920 17:36:07.058835  147299 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 17:36:07.060239  147299 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19679-105157/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:36:07.060269  147299 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 17:36:07.060278  147299 notify.go:220] Checking for updates...
	I0920 17:36:07.061657  147299 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:36:07.063107  147299 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-105157/kubeconfig
	I0920 17:36:07.064720  147299 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-105157/.minikube
	I0920 17:36:07.066435  147299 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:36:07.068006  147299 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:36:07.069669  147299 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:36:07.069958  147299 exec_runner.go:51] Run: systemctl --version
	I0920 17:36:07.072418  147299 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:36:07.083159  147299 out.go:177] * Using the none driver based on existing profile
	I0920 17:36:07.084480  147299 start.go:297] selected driver: none
	I0920 17:36:07.084497  147299 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:36:07.084615  147299 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:36:07.084638  147299 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0920 17:36:07.084997  147299 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0920 17:36:07.087186  147299 out.go:201] 
	W0920 17:36:07.088465  147299 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 17:36:07.089667  147299 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.16s)

TestFunctional/parallel/InternationalLanguage (0.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (78.397776ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-105157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-105157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0920 17:36:07.211693  147329 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:36:07.211941  147329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:36:07.211950  147329 out.go:358] Setting ErrFile to fd 2...
	I0920 17:36:07.211963  147329 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:36:07.212241  147329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19679-105157/.minikube/bin
	I0920 17:36:07.212807  147329 out.go:352] Setting JSON to false
	I0920 17:36:07.213846  147329 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":4719,"bootTime":1726849048,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:36:07.213937  147329 start.go:139] virtualization: kvm guest
	I0920 17:36:07.215867  147329 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0920 17:36:07.217199  147329 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19679-105157/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:36:07.217221  147329 out.go:177]   - MINIKUBE_LOCATION=19679
	I0920 17:36:07.217272  147329 notify.go:220] Checking for updates...
	I0920 17:36:07.219761  147329 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:36:07.221236  147329 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19679-105157/kubeconfig
	I0920 17:36:07.222525  147329 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-105157/.minikube
	I0920 17:36:07.223616  147329 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:36:07.224781  147329 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:36:07.226185  147329 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:36:07.226517  147329 exec_runner.go:51] Run: systemctl --version
	I0920 17:36:07.228949  147329 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:36:07.238654  147329 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0920 17:36:07.239851  147329 start.go:297] selected driver: none
	I0920 17:36:07.239868  147329 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:36:07.239973  147329 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:36:07.239995  147329 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0920 17:36:07.240293  147329 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0920 17:36:07.242656  147329 out.go:201] 
	W0920 17:36:07.243964  147329 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 17:36:07.245141  147329 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

TestFunctional/parallel/StatusCmd (0.41s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.41s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

TestFunctional/parallel/ProfileCmd/profile_list (0.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "152.611537ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.473861ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.20s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "150.193466ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.240385ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.19s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-8dm5l" [b5518035-fc5e-4811-81af-6ce214767aab] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-8dm5l" [b5518035-fc5e-4811-81af-6ce214767aab] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003217462s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.14s)

TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "328.518876ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:31343
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.14s)

TestFunctional/parallel/ServiceCmd/URL (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:31343
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.15s)

TestFunctional/parallel/ServiceCmdConnect (7.3s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-27xb7" [1522c844-0eba-43a5-807d-a2b2a9f11e7d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-27xb7" [1522c844-0eba-43a5-807d-a2b2a9f11e7d] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003078317s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:32244
functional_test.go:1675: http://10.138.0.48:32244: success! body:

Hostname: hello-node-connect-67bdd5bbb4-27xb7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:32244
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.30s)

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (22.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3d019fca-2671-48a3-bc39-cfd4c67eca24] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00372419s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9b5775e0-f675-4d46-aad6-15577cadf2f0] Pending
helpers_test.go:344: "sp-pod" [9b5775e0-f675-4d46-aad6-15577cadf2f0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9b5775e0-f675-4d46-aad6-15577cadf2f0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003988926s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.330591391s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0525f43e-0c56-493d-b6b9-b2d7d06735cd] Pending
helpers_test.go:344: "sp-pod" [0525f43e-0c56-493d-b6b9-b2d7d06735cd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0525f43e-0c56-493d-b6b9-b2d7d06735cd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004659513s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 149014: operation not permitted
helpers_test.go:508: unable to kill pid 148965: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3a80f7e3-4bbb-4af9-a66d-b66842732c6f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3a80f7e3-4bbb-4af9-a66d-b66842732c6f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003648896s
I0920 17:36:58.421648  111958 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.141.253 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MySQL (20.41s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-tgv78" [f6adcde5-9251-41f3-9748-4514676d5deb] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-tgv78" [f6adcde5-9251-41f3-9748-4514676d5deb] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003867848s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-tgv78 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-tgv78 -- mysql -ppassword -e "show databases;": exit status 1 (175.52116ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0920 17:37:15.955650  111958 retry.go:31] will retry after 769.166151ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-tgv78 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-tgv78 -- mysql -ppassword -e "show databases;": exit status 1 (107.35021ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 17:37:16.833392  111958 retry.go:31] will retry after 2.096949588s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-tgv78 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.41s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.37s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.369916646s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.37s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (14.47s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.466991366s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (14.47s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.23s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.23s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestImageBuild/serial/Setup (13.23s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.233768112s)
--- PASS: TestImageBuild/serial/Setup (13.23s)

TestImageBuild/serial/NormalBuild (1.55s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.54564353s)
--- PASS: TestImageBuild/serial/NormalBuild (1.55s)

TestImageBuild/serial/BuildWithBuildArg (0.77s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.77s)

TestImageBuild/serial/BuildWithDockerIgnore (0.56s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.56s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.55s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.55s)

TestJSONOutput/start/Command (28.57s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (28.574390229s)
--- PASS: TestJSONOutput/start/Command (28.57s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.46s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.4s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.40s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.42s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.420669544s)
--- PASS: TestJSONOutput/stop/Command (10.42s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.447603ms)

-- stdout --
	{"specversion":"1.0","id":"9d3f9544-144b-4444-b545-a866882d9913","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2717dafb-9570-47c2-9448-e1255ba57814","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19679"}}
	{"specversion":"1.0","id":"56101328-3601-455d-ae56-3c8d4548a895","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"38178fb8-9059-45f9-a225-03f527cf221c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19679-105157/kubeconfig"}}
	{"specversion":"1.0","id":"9289bcc9-64e7-41e4-b13f-483909bee5a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-105157/.minikube"}}
	{"specversion":"1.0","id":"13d3e683-de32-4357-b119-43b4fde87f48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fce99fae-5e9f-46d9-8364-93d10c0a8839","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2feb864b-b293-4e7e-91f3-d4adb01692b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.19s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (34.11s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.928664448s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (18.353542028s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.26785576s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.11s)

TestPause/serial/Start (24.97s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (24.971967623s)
--- PASS: TestPause/serial/Start (24.97s)

TestPause/serial/SecondStartNoReconfiguration (30.07s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (30.070752189s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.07s)

TestPause/serial/Pause (0.5s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.50s)

TestPause/serial/VerifyStatus (0.13s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (126.81634ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)

TestPause/serial/Unpause (0.41s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.41s)

TestPause/serial/PauseAgain (0.54s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.54s)

TestPause/serial/DeletePaused (1.95s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.950597352s)
--- PASS: TestPause/serial/DeletePaused (1.95s)

TestPause/serial/VerifyDeletedResources (0.06s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

TestRunningBinaryUpgrade (69.29s)
=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2622332424 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2622332424 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (30.041484484s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (35.577554583s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.199611093s)
--- PASS: TestRunningBinaryUpgrade (69.29s)

TestStoppedBinaryUpgrade/Setup (0.56s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.56s)

TestStoppedBinaryUpgrade/Upgrade (50.32s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2831308124 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2831308124 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.766290449s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2831308124 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2831308124 -p minikube stop: (23.741166472s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (11.809600746s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (50.32s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

TestKubernetesUpgrade (314.71s)
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (27.52766971s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.333998279s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (71.604085ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m17.851212032s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (64.964871ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19679
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19679-105157/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19679-105157/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (17.565467241s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.234574604s)
--- PASS: TestKubernetesUpgrade (314.71s)


Test skip (56/167)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
102 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
104 TestFunctional/parallel/SSHCmd 0
105 TestFunctional/parallel/CpCmd 0
107 TestFunctional/parallel/FileSync 0
108 TestFunctional/parallel/CertSync 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/ImageCommands 0
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0
125 TestGvisorAddon 0
126 TestMultiControlPlane 0
134 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
161 TestKicCustomNetwork 0
162 TestKicExistingNetwork 0
163 TestKicCustomSubnet 0
164 TestKicStaticIP 0
167 TestMountStart 0
168 TestMultiNode 0
169 TestNetworkPlugins 0
170 TestNoKubernetes 0
171 TestChangeNoneUser 0
182 TestPreload 0
183 TestScheduledStopWindows 0
184 TestScheduledStopUnix 0
185 TestSkaffold 0
188 TestStartStop/group/old-k8s-version 0.13
189 TestStartStop/group/newest-cni 0.13
190 TestStartStop/group/default-k8s-diff-port 0.13
191 TestStartStop/group/no-preload 0.13
192 TestStartStop/group/disable-driver-mounts 0.13
193 TestStartStop/group/embed-certs 0.13
194 TestInsufficientStorage 0
201 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)
=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/SSHCmd (0s)
=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)
=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)
=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)
=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)
=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)
=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)
=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)
=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

                                                
                                    
x
+
TestNetworkPlugins (0s)

                                                
                                                
=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

                                                
                                    
x
+
TestNoKubernetes (0s)

                                                
                                                
=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.13s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.13s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

TestStartStop/group/default-k8s-diff-port (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

TestStartStop/group/no-preload (0.13s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.13s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    