Test Report: none_Linux 19678

8ef5536409705b0cbf1ed8a719bbf7f792426b16:2024-09-20:36299

Test failures (1/167)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 71.93s   |
TestAddons/parallel/Registry (71.93s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.795488ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-8wpxs" [52c18ffd-2b22-48ba-9662-b376d4deaec2] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003636889s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jdldd" [37c4212a-8b2f-468a-b4d8-ad804d98aea8] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003721082s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.082419067s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/20 18:01:01 [DEBUG] GET http://10.138.0.48:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
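
The failure above comes down to a one-shot busybox pod timing out while probing the registry Service's cluster DNS name, even though the test then probes the registry directly at http://10.138.0.48:5000. A minimal Go sketch of the same probe, shelling out to kubectl the way the test harness does (the one-minute budget and the "minikube" context mirror the log; the snippet itself is illustrative, not part of the test suite):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Same one-minute budget the failing `kubectl run` hit in the log.
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()

        // One-shot busybox pod that spiders the registry Service's DNS name,
        // mirroring the command the test runs via its exec harness.
        cmd := exec.CommandContext(ctx, "kubectl", "--context", "minikube",
            "run", "--rm", "registry-test", "--restart=Never",
            "--image=gcr.io/k8s-minikube/busybox", "-i", "--",
            "sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("probe failed:", err) // `exit status 1` reproduces the failure above
        }
    }

If the direct node-IP probe succeeds while this in-cluster probe times out, the problem is usually Service/DNS resolution inside the cluster rather than the registry itself.
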
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:40073               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:49 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:49 UTC | 20 Sep 24 17:49 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 20 Sep 24 17:49 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 20 Sep 24 17:49 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 20 Sep 24 17:49 UTC | 20 Sep 24 17:51 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC | 20 Sep 24 17:51 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 20 Sep 24 18:01 UTC | 20 Sep 24 18:01 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:49:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
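
Each line below follows the glog header described above: severity [IWEF], month/day, wall-clock time with microseconds, thread id, source file:line, then the message. A small illustrative parser for that layout (a sketch only, not part of minikube):

    package main

    import (
        "fmt"
        "regexp"
    )

    // glogLine captures the header fields: severity, MMDD date, time with
    // microseconds, thread id, source file:line, and the trailing message.
    var glogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

    func main() {
        line := "I0920 17:49:27.874843  212266 out.go:345] Setting OutFile to fd 1 ..."
        m := glogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a glog line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s tid=%s at=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
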
	I0920 17:49:27.874843  212266 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:49:27.874964  212266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:49:27.874975  212266 out.go:358] Setting ErrFile to fd 2...
	I0920 17:49:27.874981  212266 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:49:27.875159  212266 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-201891/.minikube/bin
	I0920 17:49:27.875755  212266 out.go:352] Setting JSON to false
	I0920 17:49:27.876632  212266 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5520,"bootTime":1726849048,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:49:27.876754  212266 start.go:139] virtualization: kvm guest
	I0920 17:49:27.878877  212266 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 17:49:27.880307  212266 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19678-201891/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:49:27.880327  212266 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 17:49:27.880353  212266 notify.go:220] Checking for updates...
	I0920 17:49:27.882612  212266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:49:27.883907  212266 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-201891/kubeconfig
	I0920 17:49:27.885198  212266 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-201891/.minikube
	I0920 17:49:27.886415  212266 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:49:27.887774  212266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:49:27.889107  212266 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:49:27.900596  212266 out.go:177] * Using the none driver based on user configuration
	I0920 17:49:27.901871  212266 start.go:297] selected driver: none
	I0920 17:49:27.901907  212266 start.go:901] validating driver "none" against <nil>
	I0920 17:49:27.901921  212266 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:49:27.901974  212266 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0920 17:49:27.902290  212266 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0920 17:49:27.902901  212266 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:49:27.903163  212266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:49:27.903193  212266 cni.go:84] Creating CNI manager for ""
	I0920 17:49:27.903255  212266 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:49:27.903270  212266 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 17:49:27.903308  212266 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:49:27.904728  212266 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0920 17:49:27.906181  212266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/config.json ...
	I0920 17:49:27.906213  212266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/config.json: {Name:mk0c764e0f1c86adf2d1640eee96e6f738cf3729 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:49:27.906391  212266 start.go:360] acquireMachinesLock for minikube: {Name:mkae33472abca0783c7c654fc48c0d5fd1da07e1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 17:49:27.906425  212266 start.go:364] duration metric: took 18.106µs to acquireMachinesLock for "minikube"
	I0920 17:49:27.906438  212266 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 17:49:27.906520  212266 start.go:125] createHost starting for "" (driver="none")
	I0920 17:49:27.908069  212266 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0920 17:49:27.909361  212266 exec_runner.go:51] Run: systemctl --version
	I0920 17:49:27.912101  212266 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0920 17:49:27.912141  212266 client.go:168] LocalClient.Create starting
	I0920 17:49:27.912205  212266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-201891/.minikube/certs/ca.pem
	I0920 17:49:27.912242  212266 main.go:141] libmachine: Decoding PEM data...
	I0920 17:49:27.912259  212266 main.go:141] libmachine: Parsing certificate...
	I0920 17:49:27.912326  212266 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-201891/.minikube/certs/cert.pem
	I0920 17:49:27.912346  212266 main.go:141] libmachine: Decoding PEM data...
	I0920 17:49:27.912359  212266 main.go:141] libmachine: Parsing certificate...
	I0920 17:49:27.912726  212266 client.go:171] duration metric: took 572.344µs to LocalClient.Create
	I0920 17:49:27.912750  212266 start.go:167] duration metric: took 650.196µs to libmachine.API.Create "minikube"
	I0920 17:49:27.912757  212266 start.go:293] postStartSetup for "minikube" (driver="none")
	I0920 17:49:27.912805  212266 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:49:27.912839  212266 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:49:27.921530  212266 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 17:49:27.921553  212266 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 17:49:27.921561  212266 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 17:49:27.923685  212266 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0920 17:49:27.924938  212266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-201891/.minikube/addons for local assets ...
	I0920 17:49:27.924985  212266 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-201891/.minikube/files for local assets ...
	I0920 17:49:27.925014  212266 start.go:296] duration metric: took 12.248062ms for postStartSetup
	I0920 17:49:27.925788  212266 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/config.json ...
	I0920 17:49:27.925984  212266 start.go:128] duration metric: took 19.452924ms to createHost
	I0920 17:49:27.926000  212266 start.go:83] releasing machines lock for "minikube", held for 19.568392ms
	I0920 17:49:27.926405  212266 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 17:49:27.926498  212266 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0920 17:49:27.928853  212266 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 17:49:27.928928  212266 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:49:27.939249  212266 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 17:49:27.939292  212266 start.go:495] detecting cgroup driver to use...
	I0920 17:49:27.939327  212266 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:49:27.939451  212266 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:49:27.957454  212266 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 17:49:27.966942  212266 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 17:49:27.976281  212266 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 17:49:27.976355  212266 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 17:49:27.984918  212266 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:49:27.994189  212266 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 17:49:28.003655  212266 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:49:28.012421  212266 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:49:28.021091  212266 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 17:49:28.030460  212266 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 17:49:28.040153  212266 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 17:49:28.048547  212266 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:49:28.056591  212266 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:49:28.063476  212266 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 17:49:28.291058  212266 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0920 17:49:28.355959  212266 start.go:495] detecting cgroup driver to use...
	I0920 17:49:28.356016  212266 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:49:28.356126  212266 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:49:28.376157  212266 exec_runner.go:51] Run: which cri-dockerd
	I0920 17:49:28.377119  212266 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 17:49:28.384972  212266 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0920 17:49:28.384999  212266 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0920 17:49:28.385035  212266 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0920 17:49:28.393159  212266 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 17:49:28.393295  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1820548029 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0920 17:49:28.400674  212266 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0920 17:49:28.589297  212266 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0920 17:49:28.799654  212266 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 17:49:28.799808  212266 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0920 17:49:28.799822  212266 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0920 17:49:28.799869  212266 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0920 17:49:28.809200  212266 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0920 17:49:28.809346  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2103591991 /etc/docker/daemon.json
	I0920 17:49:28.817166  212266 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 17:49:29.026043  212266 exec_runner.go:51] Run: sudo systemctl restart docker
	I0920 17:49:29.330170  212266 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 17:49:29.341210  212266 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0920 17:49:29.356862  212266 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 17:49:29.368233  212266 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0920 17:49:29.582068  212266 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0920 17:49:29.795329  212266 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 17:49:30.009818  212266 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0920 17:49:30.024603  212266 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 17:49:30.035995  212266 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 17:49:30.272611  212266 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0920 17:49:30.344122  212266 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 17:49:30.344199  212266 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0920 17:49:30.345634  212266 start.go:563] Will wait 60s for crictl version
	I0920 17:49:30.345700  212266 exec_runner.go:51] Run: which crictl
	I0920 17:49:30.346649  212266 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0920 17:49:30.374735  212266 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0920 17:49:30.374797  212266 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0920 17:49:30.396051  212266 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0920 17:49:30.418729  212266 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0920 17:49:30.418825  212266 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0920 17:49:30.421692  212266 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0920 17:49:30.422788  212266 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:49:30.422905  212266 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:49:30.422916  212266 kubeadm.go:934] updating node { 10.138.0.48 8443 v1.31.1 docker true true} ...
	I0920 17:49:30.423005  212266 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-2 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.138.0.48 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0920 17:49:30.423087  212266 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0920 17:49:30.468843  212266 cni.go:84] Creating CNI manager for ""
	I0920 17:49:30.468870  212266 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:49:30.468883  212266 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:49:30.468905  212266 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.138.0.48 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-2 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.138.0.48"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.138.0.48 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:49:30.469117  212266 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.138.0.48
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-2"
	  kubeletExtraArgs:
	    node-ip: 10.138.0.48
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.138.0.48"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 17:49:30.469177  212266 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:49:30.478153  212266 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 17:49:30.478205  212266 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 17:49:30.486042  212266 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 17:49:30.486087  212266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-201891/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 17:49:30.486123  212266 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 17:49:30.486190  212266 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:49:30.486219  212266 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 17:49:30.486264  212266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-201891/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 17:49:30.497169  212266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-201891/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0920 17:49:30.537303  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1575120985 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 17:49:30.545106  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2343394310 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 17:49:30.575871  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1269843021 /var/lib/minikube/binaries/v1.31.1/kubelet
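
The three binaries above are fetched via URLs of the form https://dl.k8s.io/...?checksum=file:<url>.sha256, i.e. the download is verified against the digest file published next to it. A sketch of that verify-while-downloading pattern, assuming (as holds for these Kubernetes release artifacts) that the .sha256 file contains just the hex digest:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchVerified downloads url to dest while hashing the stream, then
    // compares the digest against the published <url>.sha256 file.
    func fetchVerified(url, dest string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("GET %s: %s", url, resp.Status)
        }

        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }

        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        want, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }

        if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
            return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
        }
        return nil
    }

    func main() {
        url := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm"
        if err := fetchVerified(url, "kubeadm"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("kubeadm downloaded and verified")
    }
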
	I0920 17:49:30.640785  212266 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:49:30.649050  212266 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0920 17:49:30.649071  212266 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0920 17:49:30.649114  212266 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0920 17:49:30.656518  212266 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0920 17:49:30.656719  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2890020410 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0920 17:49:30.664228  212266 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0920 17:49:30.664245  212266 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0920 17:49:30.664279  212266 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0920 17:49:30.671217  212266 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:49:30.671345  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3015674832 /lib/systemd/system/kubelet.service
	I0920 17:49:30.679470  212266 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0920 17:49:30.679594  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4196573614 /var/tmp/minikube/kubeadm.yaml.new
	I0920 17:49:30.687127  212266 exec_runner.go:51] Run: grep 10.138.0.48	control-plane.minikube.internal$ /etc/hosts
	I0920 17:49:30.688317  212266 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 17:49:30.880703  212266 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0920 17:49:30.894263  212266 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube for IP: 10.138.0.48
	I0920 17:49:30.894294  212266 certs.go:194] generating shared ca certs ...
	I0920 17:49:30.894321  212266 certs.go:226] acquiring lock for ca certs: {Name:mk4433f653c301b4f25e549600493157a8ba9e80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:49:30.894490  212266 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-201891/.minikube/ca.key
	I0920 17:49:30.894550  212266 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-201891/.minikube/proxy-client-ca.key
	I0920 17:49:30.894564  212266 certs.go:256] generating profile certs ...
	I0920 17:49:30.894647  212266 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/client.key
	I0920 17:49:30.894669  212266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/client.crt with IP's: []
	I0920 17:49:31.041118  212266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/client.crt ...
	I0920 17:49:31.041151  212266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/client.crt: {Name:mk35e3971fc332b094c8682739d982f188678791 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:49:31.041295  212266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/client.key ...
	I0920 17:49:31.041307  212266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/client.key: {Name:mkfa674c5ddf33655de600eac6d0172d9d54921e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:49:31.041370  212266 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/apiserver.key.35c0634a
	I0920 17:49:31.041388  212266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/apiserver.crt.35c0634a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.138.0.48]
	I0920 17:49:31.123114  212266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/apiserver.crt.35c0634a ...
	I0920 17:49:31.123146  212266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/apiserver.crt.35c0634a: {Name:mk39c16392214dfc069e4d0577339b28a0fe22f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:49:31.123284  212266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/apiserver.key.35c0634a ...
	I0920 17:49:31.123296  212266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/apiserver.key.35c0634a: {Name:mkc3a96d7c5c17e1014af0813155d0da831dfc9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:49:31.123351  212266 certs.go:381] copying /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/apiserver.crt.35c0634a -> /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/apiserver.crt
	I0920 17:49:31.123426  212266 certs.go:385] copying /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/apiserver.key.35c0634a -> /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/apiserver.key
	I0920 17:49:31.123477  212266 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/proxy-client.key
	I0920 17:49:31.123491  212266 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0920 17:49:31.214550  212266 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/proxy-client.crt ...
	I0920 17:49:31.214581  212266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/proxy-client.crt: {Name:mke52188319dff502cf0fe3aa989dc2c0e4a3d86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:49:31.214717  212266 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/proxy-client.key ...
	I0920 17:49:31.214727  212266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/proxy-client.key: {Name:mk1d9c68e65f41f5dd28c4a210a80c7db62cc733 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:49:31.214889  212266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-201891/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 17:49:31.214921  212266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-201891/.minikube/certs/ca.pem (1082 bytes)
	I0920 17:49:31.214944  212266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-201891/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:49:31.214978  212266 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-201891/.minikube/certs/key.pem (1675 bytes)
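
The profile certs above are ordinary CA-signed x509 certificates: a cached minikubeCA key signs per-profile client and serving certs, the latter carrying the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 10.138.0.48). A minimal sketch of that kind of generation with Go's crypto/x509 (names, serials, and validity are illustrative; error handling elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA standing in for minikubeCA (the real run reuses
        // the cached ca.key/ca.crt instead of regenerating them).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert with the same IP SANs the log reports for apiserver.crt.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.138.0.48"),
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
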
	I0920 17:49:31.215567  212266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-201891/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:49:31.215690  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1556099807 /var/lib/minikube/certs/ca.crt
	I0920 17:49:31.224257  212266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-201891/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:49:31.224399  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1150746279 /var/lib/minikube/certs/ca.key
	I0920 17:49:31.232157  212266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-201891/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:49:31.232292  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2566300951 /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 17:49:31.240120  212266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-201891/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 17:49:31.240259  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube761480827 /var/lib/minikube/certs/proxy-client-ca.key
	I0920 17:49:31.248056  212266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0920 17:49:31.248185  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2284572823 /var/lib/minikube/certs/apiserver.crt
	I0920 17:49:31.255878  212266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 17:49:31.256013  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2666693893 /var/lib/minikube/certs/apiserver.key
	I0920 17:49:31.263737  212266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:49:31.263842  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube552008797 /var/lib/minikube/certs/proxy-client.crt
	I0920 17:49:31.272387  212266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-201891/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:49:31.272515  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2446665156 /var/lib/minikube/certs/proxy-client.key
	I0920 17:49:31.280089  212266 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0920 17:49:31.280105  212266 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:49:31.280136  212266 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:49:31.287748  212266 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-201891/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:49:31.287892  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1935970676 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:49:31.296190  212266 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:49:31.296323  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2716485689 /var/lib/minikube/kubeconfig
	I0920 17:49:31.304578  212266 exec_runner.go:51] Run: openssl version
	I0920 17:49:31.307289  212266 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:49:31.315766  212266 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:49:31.317118  212266 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 20 17:49 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:49:31.317166  212266 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:49:31.319956  212266 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
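
The two commands above install the CA into the system trust store the way OpenSSL expects: compute the certificate's subject hash and symlink the PEM as <hash>.0 (b5213941.0 here). A sketch of the same step (an illustrative helper, assuming the openssl CLI is on PATH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // trustCA symlinks a PEM CA into certDir under its OpenSSL subject hash
    // (<hash>.0), the layout the log builds for minikubeCA.pem above.
    func trustCA(pemPath, certDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace a stale link, as the `ln -fs` in the log does
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := trustCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
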
	I0920 17:49:31.327792  212266 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:49:31.328893  212266 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:49:31.328931  212266 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:49:31.329046  212266 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 17:49:31.344536  212266 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:49:31.353185  212266 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:49:31.360777  212266 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0920 17:49:31.381357  212266 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:49:31.389436  212266 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:49:31.389464  212266 kubeadm.go:157] found existing configuration files:
	
	I0920 17:49:31.389506  212266 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:49:31.397509  212266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:49:31.397589  212266 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:49:31.404847  212266 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:49:31.412371  212266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:49:31.412433  212266 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:49:31.420160  212266 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:49:31.427847  212266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:49:31.427907  212266 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:49:31.435767  212266 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:49:31.444808  212266 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:49:31.444866  212266 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
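The lines above implement minikube's stale-kubeconfig sweep: each of the four kubeconfig files is grepped for the expected control-plane endpoint, and any file failing the check (exit status 2 here simply means the file does not exist yet on a fresh host) is removed with rm -f before kubeadm regenerates it. A minimal sketch of the same loop, assuming the standard /etc/kubernetes layout from the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # grep also exits non-zero for a missing file, so absent files are "removed" harmlessly
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done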
	I0920 17:49:31.452098  212266 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 17:49:31.485724  212266 kubeadm.go:310] W0920 17:49:31.485590  213156 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:49:31.486257  212266 kubeadm.go:310] W0920 17:49:31.486217  213156 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
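The two kubeadm warnings above are advisory: the generated config still uses the deprecated v1beta3 API, and kubeadm suggests migrating it. The migration command is the one the warning itself names; the output path below is a hypothetical choice:

	kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-new.yaml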
	I0920 17:49:31.487953  212266 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:49:31.488005  212266 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:49:31.589007  212266 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 17:49:31.589134  212266 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:49:31.589156  212266 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:49:31.589165  212266 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:49:31.601573  212266 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:49:31.604562  212266 out.go:235]   - Generating certificates and keys ...
	I0920 17:49:31.604646  212266 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:49:31.604700  212266 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:49:31.690949  212266 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:49:31.875152  212266 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:49:32.129576  212266 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:49:32.444095  212266 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:49:32.648283  212266 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:49:32.648325  212266 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0920 17:49:32.773071  212266 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:49:32.773139  212266 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-2] and IPs [10.138.0.48 127.0.0.1 ::1]
	I0920 17:49:32.894484  212266 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:49:32.988612  212266 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:49:33.071154  212266 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:49:33.071311  212266 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:49:33.132674  212266 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:49:33.294761  212266 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:49:33.444018  212266 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:49:33.610971  212266 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:49:33.836465  212266 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:49:33.837031  212266 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:49:33.839444  212266 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:49:33.841722  212266 out.go:235]   - Booting up control plane ...
	I0920 17:49:33.841752  212266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:49:33.841770  212266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:49:33.841776  212266 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:49:33.862245  212266 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:49:33.867692  212266 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:49:33.867717  212266 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:49:34.103192  212266 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:49:34.103214  212266 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:49:35.104734  212266 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001481413s
	I0920 17:49:35.104762  212266 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:49:39.106460  212266 kubeadm.go:310] [api-check] The API server is healthy after 4.001744977s
	I0920 17:49:39.116258  212266 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:49:39.125026  212266 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:49:39.141508  212266 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:49:39.141529  212266 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-2 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:49:39.150143  212266 kubeadm.go:310] [bootstrap-token] Using token: p3xinw.61l2gv6oiewwpe4t
	I0920 17:49:39.151465  212266 out.go:235]   - Configuring RBAC rules ...
	I0920 17:49:39.151524  212266 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:49:39.155269  212266 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:49:39.160339  212266 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:49:39.162838  212266 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:49:39.166682  212266 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:49:39.181617  212266 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:49:39.513350  212266 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:49:39.929917  212266 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:49:40.511576  212266 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:49:40.512332  212266 kubeadm.go:310] 
	I0920 17:49:40.512346  212266 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:49:40.512350  212266 kubeadm.go:310] 
	I0920 17:49:40.512354  212266 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:49:40.512358  212266 kubeadm.go:310] 
	I0920 17:49:40.512363  212266 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:49:40.512367  212266 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:49:40.512371  212266 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:49:40.512375  212266 kubeadm.go:310] 
	I0920 17:49:40.512378  212266 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:49:40.512390  212266 kubeadm.go:310] 
	I0920 17:49:40.512395  212266 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:49:40.512399  212266 kubeadm.go:310] 
	I0920 17:49:40.512402  212266 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:49:40.512407  212266 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:49:40.512411  212266 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:49:40.512415  212266 kubeadm.go:310] 
	I0920 17:49:40.512420  212266 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:49:40.512427  212266 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:49:40.512431  212266 kubeadm.go:310] 
	I0920 17:49:40.512435  212266 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p3xinw.61l2gv6oiewwpe4t \
	I0920 17:49:40.512439  212266 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d19285452187d4b025069ff023c2b295c0bc0cfdef1c40f49315c1608f56e924 \
	I0920 17:49:40.512441  212266 kubeadm.go:310] 	--control-plane 
	I0920 17:49:40.512444  212266 kubeadm.go:310] 
	I0920 17:49:40.512446  212266 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:49:40.512449  212266 kubeadm.go:310] 
	I0920 17:49:40.512451  212266 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p3xinw.61l2gv6oiewwpe4t \
	I0920 17:49:40.512454  212266 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d19285452187d4b025069ff023c2b295c0bc0cfdef1c40f49315c1608f56e924 
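The join commands above embed a bootstrap token and a CA-certificate discovery hash. As a sanity check, that hash can be recomputed from the cluster CA using the pipeline from the kubeadm documentation; the certificate path below follows the certificateDir logged earlier (/var/lib/minikube/certs), and the printed digest should match the sha256 value in the join command:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'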
	I0920 17:49:40.515223  212266 cni.go:84] Creating CNI manager for ""
	I0920 17:49:40.515249  212266 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:49:40.517059  212266 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 17:49:40.518488  212266 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0920 17:49:40.528893  212266 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 17:49:40.529051  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube853999080 /etc/cni/net.d/1-k8s.conflist
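The two lines above stage a 496-byte bridge CNI config in memory and copy it into place; the payload itself is never printed. Dumping the installed file should show a conflist of roughly this shape (field values illustrative, not taken from the log):

	$ sudo cat /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}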
	I0920 17:49:40.537512  212266 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:49:40.537570  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:40.537598  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-2 minikube.k8s.io/updated_at=2024_09_20T17_49_40_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0920 17:49:40.546294  212266 ops.go:34] apiserver oom_adj: -16
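Besides reading the API server's oom_adj (reported as -16 above), this burst grants cluster-admin to the kube-system default service account and labels the node. Two hypothetical follow-up checks for those objects, using plain kubectl against the same cluster:

	kubectl get clusterrolebinding minikube-rbac -o wide
	kubectl get node ubuntu-20-agent-2 --show-labels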
	I0920 17:49:40.605937  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:41.106190  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:41.606378  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:42.106140  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:42.605984  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:43.106375  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:43.606431  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:44.106930  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:44.606926  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:45.106666  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:45.606357  212266 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:49:45.672080  212266 kubeadm.go:1113] duration metric: took 5.134552799s to wait for elevateKubeSystemPrivileges
	I0920 17:49:45.672121  212266 kubeadm.go:394] duration metric: took 14.343192538s to StartCluster
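The run of near-identical "get sa default" lines above, spaced roughly 500ms apart, is a readiness poll: minikube retries until the default ServiceAccount exists, which is what elevateKubeSystemPrivileges spent 5.13s waiting for. The equivalent loop, reusing the exact command from the log:

	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done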
	I0920 17:49:45.672150  212266 settings.go:142] acquiring lock: {Name:mkdca67a4e5c47f4c32628c0d8cdaa585945f163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:49:45.672234  212266 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-201891/kubeconfig
	I0920 17:49:45.673025  212266 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-201891/kubeconfig: {Name:mkd363355e06d8f7c5b6ee4bbfa2d3c29ac40039 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:49:45.673315  212266 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 17:49:45.673296  212266 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:49:45.673436  212266 addons.go:69] Setting yakd=true in profile "minikube"
	I0920 17:49:45.673451  212266 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0920 17:49:45.673466  212266 addons.go:234] Setting addon yakd=true in "minikube"
	I0920 17:49:45.673475  212266 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0920 17:49:45.673504  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:45.673510  212266 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0920 17:49:45.673506  212266 addons.go:69] Setting registry=true in profile "minikube"
	I0920 17:49:45.673535  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:45.673547  212266 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0920 17:49:45.673551  212266 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0920 17:49:45.673565  212266 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0920 17:49:45.673573  212266 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0920 17:49:45.673583  212266 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0920 17:49:45.673596  212266 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0920 17:49:45.673637  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:45.673535  212266 addons.go:234] Setting addon registry=true in "minikube"
	I0920 17:49:45.674349  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:45.673535  212266 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:49:45.673557  212266 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0920 17:49:45.674484  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:45.674751  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.674775  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.674816  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.674835  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.674859  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.674892  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.675361  212266 out.go:177] * Configuring local host environment ...
	I0920 17:49:45.675370  212266 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0920 17:49:45.675399  212266 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0920 17:49:45.675437  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:45.675557  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.675581  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.675617  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.676311  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.676334  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.676384  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.673434  212266 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0920 17:49:45.676583  212266 mustload.go:65] Loading cluster: minikube
	I0920 17:49:45.676835  212266 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:49:45.677048  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.677065  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.677098  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.673467  212266 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0920 17:49:45.677412  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:45.677414  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.677430  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.677463  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.678246  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.678256  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.678261  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.678273  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.678303  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.678306  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.678470  212266 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0920 17:49:45.678488  212266 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0920 17:49:45.678580  212266 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0920 17:49:45.678601  212266 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0920 17:49:45.678631  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:45.678633  212266 addons.go:69] Setting volcano=true in profile "minikube"
	I0920 17:49:45.678651  212266 addons.go:234] Setting addon volcano=true in "minikube"
	I0920 17:49:45.678704  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:45.679475  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.679502  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.679537  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.679549  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.679564  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.679600  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0920 17:49:45.679830  212266 out.go:270] * 
	W0920 17:49:45.679884  212266 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0920 17:49:45.679906  212266 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0920 17:49:45.679917  212266 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0920 17:49:45.679927  212266 out.go:270] * 
	I0920 17:49:45.673576  212266 addons.go:234] Setting addon metrics-server=true in "minikube"
	W0920 17:49:45.679982  212266 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0920 17:49:45.679993  212266 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0920 17:49:45.679999  212266 out.go:270] * 
	I0920 17:49:45.679998  212266 host.go:66] Checking if "minikube" exists ...
	W0920 17:49:45.680020  212266 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0920 17:49:45.680027  212266 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0920 17:49:45.680032  212266 out.go:270] * 
	W0920 17:49:45.680038  212266 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
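A hypothetical session applying the env-var advice above before starting the cluster; sudo -E preserves the variable for the root-run process, which the 'none' driver requires:

	export CHANGE_MINIKUBE_NONE_USER=true
	sudo -E minikube start --driver=none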
	I0920 17:49:45.680066  212266 start.go:235] Will wait 6m0s for node &{Name: IP:10.138.0.48 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 17:49:45.682272  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.682301  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.682333  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.682575  212266 out.go:177] * Verifying Kubernetes components...
	I0920 17:49:45.686202  212266 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0920 17:49:45.696406  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.701791  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.704569  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.704977  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.710835  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.710860  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.710891  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.713118  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.724257  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.724332  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.727387  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.728365  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.728759  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.728770  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.738787  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.738816  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.738859  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.742340  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.742529  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.742555  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.742613  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.743658  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.749029  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
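The interleaved pgrep/egrep/freezer/healthz lines throughout this stretch all repeat one apiserver health probe, once per addon goroutine. Collapsed into a single sequence (paths and patterns taken from the log; the cgroup v1 freezer hierarchy shown there is assumed):

	pid=$(pgrep -xnf 'kube-apiserver.*minikube.*')
	cg=$(sudo egrep '^[0-9]+:freezer:' "/proc/$pid/cgroup" | cut -d: -f3)
	sudo cat "/sys/fs/cgroup/freezer${cg}/freezer.state"   # expect THAWED
	curl -sk https://10.138.0.48:8443/healthz; echo        # expect ok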
	I0920 17:49:45.751082  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.753033  212266 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 17:49:45.753050  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.753085  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.753807  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.753931  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.755081  212266 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 17:49:45.755110  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 17:49:45.755250  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3831823354 /etc/kubernetes/addons/deployment.yaml
	I0920 17:49:45.756893  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.756942  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.757669  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.759382  212266 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 17:49:45.760055  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.761026  212266 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 17:49:45.761056  212266 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 17:49:45.761185  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3059161465 /etc/kubernetes/addons/ig-namespace.yaml
	I0920 17:49:45.762895  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.762977  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.764139  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.764195  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.765108  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.765156  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.767342  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.767611  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.767664  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.774346  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.774400  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.782604  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 17:49:45.789172  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.789406  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.790434  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.790460  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.791015  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.791034  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.793453  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.793476  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.793561  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.793572  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.797936  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.797992  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.798711  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.798761  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.798777  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.798825  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.799082  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.799437  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.799456  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.799461  212266 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 17:49:45.799483  212266 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 17:49:45.799593  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2587944498 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 17:49:45.799966  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.799983  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:45.801682  212266 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 17:49:45.802491  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.803065  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.803765  212266 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:49:45.804968  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.805277  212266 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 17:49:45.805301  212266 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 17:49:45.805957  212266 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0920 17:49:45.805995  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:45.806678  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.806696  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.806732  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.808262  212266 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 17:49:45.808677  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 17:49:45.808830  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2040489293 /etc/kubernetes/addons/registry-rc.yaml
	I0920 17:49:45.808992  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.809012  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.809151  212266 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:49:45.809169  212266 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0920 17:49:45.809176  212266 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:49:45.809212  212266 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:49:45.809263  212266 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 17:49:45.810684  212266 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 17:49:45.811848  212266 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 17:49:45.813380  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.814533  212266 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 17:49:45.814532  212266 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 17:49:45.815299  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.815312  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.816125  212266 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:49:45.816156  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 17:49:45.816278  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2454263588 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:49:45.817347  212266 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 17:49:45.818449  212266 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 17:49:45.818613  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.818632  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.819229  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.819249  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.820136  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.820156  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.820638  212266 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 17:49:45.820666  212266 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 17:49:45.820886  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube222915610 /etc/kubernetes/addons/ig-role.yaml
	I0920 17:49:45.821079  212266 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 17:49:45.822306  212266 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 17:49:45.822334  212266 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 17:49:45.822451  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2334461810 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 17:49:45.825522  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.826403  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.826594  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.827099  212266 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 17:49:45.827486  212266 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0920 17:49:45.827532  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:45.828552  212266 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 17:49:45.828584  212266 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 17:49:45.829836  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:45.829853  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:45.829888  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:45.830026  212266 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 17:49:45.830765  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.831800  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:49:45.835254  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2386460936 /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:49:45.833249  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2361242533 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 17:49:45.834844  212266 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 17:49:45.835680  212266 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 17:49:45.836021  212266 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 17:49:45.836092  212266 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 17:49:45.836735  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1965184786 /etc/kubernetes/addons/registry-svc.yaml
	I0920 17:49:45.836753  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3717362977 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 17:49:45.837036  212266 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 17:49:45.838598  212266 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 17:49:45.840471  212266 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 17:49:45.844181  212266 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:49:45.844218  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 17:49:45.845774  212266 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 17:49:45.845806  212266 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 17:49:45.845982  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2816993319 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 17:49:45.846128  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.846869  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1473382955 /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:49:45.858476  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.858506  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.862994  212266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 17:49:45.863024  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 17:49:45.863308  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2169432490 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 17:49:45.863567  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:49:45.863788  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.863874  212266 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 17:49:45.863897  212266 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 17:49:45.864013  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4147491537 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 17:49:45.864287  212266 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:49:45.864308  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 17:49:45.864323  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:49:45.864413  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2402042748 /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:49:45.866078  212266 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 17:49:45.867091  212266 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 17:49:45.867136  212266 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 17:49:45.867258  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2433487689 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 17:49:45.873064  212266 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 17:49:45.873097  212266 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 17:49:45.873665  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:45.874026  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube951893707 /etc/kubernetes/addons/yakd-ns.yaml
	I0920 17:49:45.880351  212266 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 17:49:45.880383  212266 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 17:49:45.880585  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.880652  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.881973  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube37868993 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 17:49:45.884252  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:49:45.887560  212266 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
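The bash -c pipeline above rewrites the CoreDNS Corefile in flight: sed inserts a hosts block (mapping host.minikube.internal to 127.0.0.1, which is the host itself under the 'none' driver) immediately before the "forward . /etc/resolv.conf" line, and a log directive before "errors", then pipes the result into kubectl replace. Afterwards the ConfigMap should read roughly as follows (unaffected directives elided):

	$ kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       127.0.0.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}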
	I0920 17:49:45.891889  212266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 17:49:45.892669  212266 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 17:49:45.893012  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube816872352 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 17:49:45.893890  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
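This apply is the one that creates the registry addon objects under test. Hypothetical spot checks after it returns, with object kinds and names inferred from the manifest filenames rather than stated in the log:

	kubectl -n kube-system get rc registry        # from registry-rc.yaml (kind assumed)
	kubectl -n kube-system get svc registry       # from registry-svc.yaml (name assumed)
	kubectl -n kube-system get ds registry-proxy  # from registry-proxy.yaml (kind assumed)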
	I0920 17:49:45.895251  212266 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 17:49:45.895282  212266 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 17:49:45.895422  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube490007128 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 17:49:45.896321  212266 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 17:49:45.896343  212266 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 17:49:45.896456  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3254860732 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 17:49:45.917593  212266 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 17:49:45.917640  212266 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 17:49:45.917786  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube612782590 /etc/kubernetes/addons/yakd-sa.yaml
	I0920 17:49:45.917973  212266 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 17:49:45.917991  212266 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 17:49:45.918105  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1875663176 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 17:49:45.923985  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.924020  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:45.926871  212266 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 17:49:45.926911  212266 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 17:49:45.927049  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube664844668 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 17:49:45.931620  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:45.931668  212266 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:49:45.931684  212266 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0920 17:49:45.931691  212266 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0920 17:49:45.931734  212266 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:49:45.932160  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:45.932208  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:45.945726  212266 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 17:49:45.945762  212266 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 17:49:45.945887  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube718707978 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 17:49:45.946160  212266 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:49:45.946188  212266 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 17:49:45.946289  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube512799550 /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:49:45.949074  212266 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 17:49:45.949094  212266 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 17:49:45.949175  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube43361329 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 17:49:45.950288  212266 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 17:49:45.950306  212266 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 17:49:45.950385  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2684727876 /etc/kubernetes/addons/yakd-crb.yaml
	I0920 17:49:45.961341  212266 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 17:49:45.961377  212266 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 17:49:45.961499  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3447101534 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 17:49:45.978785  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:49:45.981370  212266 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 17:49:45.981404  212266 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 17:49:45.981536  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2165753192 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 17:49:45.986318  212266 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:49:45.986354  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 17:49:45.986766  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube348724928 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:49:45.987117  212266 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 17:49:45.987150  212266 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 17:49:45.987267  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1268281099 /etc/kubernetes/addons/yakd-svc.yaml
	I0920 17:49:45.996374  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:45.996411  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:46.001972  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:46.015156  212266 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 17:49:46.015191  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 17:49:46.015312  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3448061313 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 17:49:46.016321  212266 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:49:46.016351  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 17:49:46.016473  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2365076787 /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:49:46.022238  212266 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 17:49:46.022268  212266 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 17:49:46.022420  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3720851292 /etc/kubernetes/addons/ig-crd.yaml
	I0920 17:49:46.022755  212266 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 17:49:46.024792  212266 out.go:177]   - Using image docker.io/busybox:stable
	I0920 17:49:46.027142  212266 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:49:46.027182  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 17:49:46.027344  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4035916785 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:49:46.031852  212266 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:49:46.031880  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 17:49:46.032000  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3286493503 /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:49:46.035563  212266 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 17:49:46.035587  212266 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 17:49:46.035709  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube461380115 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 17:49:46.036995  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:49:46.056881  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:49:46.060886  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:49:46.080028  212266 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:49:46.080221  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3940200583 /etc/kubernetes/addons/storageclass.yaml
	I0920 17:49:46.117342  212266 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0920 17:49:46.118263  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:49:46.148134  212266 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 17:49:46.148180  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 17:49:46.148341  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4268546700 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 17:49:46.156132  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:49:46.202611  212266 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-2" to be "Ready" ...
	I0920 17:49:46.213092  212266 node_ready.go:49] node "ubuntu-20-agent-2" has status "Ready":"True"
	I0920 17:49:46.213125  212266 node_ready.go:38] duration metric: took 10.472881ms for node "ubuntu-20-agent-2" to be "Ready" ...
	I0920 17:49:46.213139  212266 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:49:46.223391  212266 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cv7dr" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:46.264304  212266 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 17:49:46.264540  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 17:49:46.267943  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2575056589 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 17:49:46.379738  212266 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:49:46.379776  212266 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 17:49:46.379958  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3867980983 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:49:46.458286  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:49:46.506184  212266 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
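	
	The start.go line above reports a {"host.minikube.internal": 127.0.0.1} record injected into CoreDNS's ConfigMap. One plausible way to do that with client-go is sketched below; the Corefile anchor string and hosts-block placement are assumptions for illustration only, not minikube's exact edit:
	
		// Illustrative ConfigMap edit with client-go: add a hosts block for
		// host.minikube.internal to the coredns Corefile.
		package sketch
	
		import (
			"context"
			"fmt"
			"strings"
	
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
		)
	
		func injectHostRecord(cs kubernetes.Interface, ip string) error {
			cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
				return nil // record already present
			}
			hosts := fmt.Sprintf("hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n        ", ip)
			// Assumed anchor: splice the hosts block in ahead of an existing plugin.
			cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "prometheus :9153", hosts+"prometheus :9153", 1)
			_, err = cs.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{})
			return err
		}
	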
	I0920 17:49:46.893954  212266 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.03033611s)
	I0920 17:49:46.921057  212266 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.027112476s)
	I0920 17:49:46.921091  212266 addons.go:475] Verifying addon registry=true in "minikube"
	I0920 17:49:46.922770  212266 out.go:177] * Verifying registry addon...
	I0920 17:49:46.925651  212266 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 17:49:46.939407  212266 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 17:49:46.939433  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
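	
	The kapi.go lines that follow are a poll loop: re-list the pods matching the label selector until none of them is still Pending. A simplified client-go sketch of the same loop (interval and timeout values are illustrative):
	
		// Illustrative client-go version of the kapi.go wait above: poll pods
		// by label selector until every matching pod is Running.
		package sketch
	
		import (
			"context"
			"fmt"
			"time"
	
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/apimachinery/pkg/util/wait"
			"k8s.io/client-go/kubernetes"
		)
	
		func waitForLabel(cs kubernetes.Interface, ns, selector string) error {
			return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
		}
	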
	I0920 17:49:47.013775  212266 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0920 17:49:47.171394  212266 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.192528151s)
	I0920 17:49:47.171437  212266 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0920 17:49:47.200816  212266 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.139845963s)
	I0920 17:49:47.276590  212266 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.158142113s)
	I0920 17:49:47.278461  212266 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0920 17:49:47.286212  212266 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.229274797s)
	I0920 17:49:47.436641  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:47.746987  212266 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.709923289s)
	W0920 17:49:47.747032  212266 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 17:49:47.747062  212266 retry.go:31] will retry after 297.133819ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
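	
	This failure is the usual CRD establishment race: a single kubectl apply both creates the VolumeSnapshot CRDs and instantiates a VolumeSnapshotClass, and the new kind is not yet served when the instance is submitted, hence "ensure CRDs are installed first". minikube simply retries the whole apply (retry.go:31 above); the standard alternative is to apply the CRDs first and kubectl wait --for=condition=established before creating instances. A generic retry sketch under that reading (the command, attempt count, and backoff are illustrative, and minikube's own retry helper differs in detail):
	
		// Generic apply-with-retry sketch for the CRD establishment race above.
		package sketch
	
		import (
			"fmt"
			"os/exec"
			"time"
		)
	
		func retryApply(args []string, attempts int, backoff time.Duration) error {
			var lastErr error
			for i := 0; i < attempts; i++ {
				out, err := exec.Command("kubectl", args...).CombinedOutput()
				if err == nil {
					return nil
				}
				lastErr = fmt.Errorf("apply failed: %v: %s", err, out)
				time.Sleep(backoff)
				backoff *= 2 // back off between attempts
			}
			return lastErr
		}
	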
	I0920 17:49:47.933346  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:48.044815  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:49:48.232723  212266 pod_ready.go:103] pod "coredns-7c65d6cfc9-cv7dr" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:48.437952  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:48.935735  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:48.970625  212266 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.086319329s)
	I0920 17:49:49.154208  212266 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.695849121s)
	I0920 17:49:49.154248  212266 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0920 17:49:49.156014  212266 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 17:49:49.158868  212266 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 17:49:49.167680  212266 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 17:49:49.167706  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:49.430072  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:49.664859  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:49.930336  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:50.163516  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:50.429593  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:50.664055  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:50.730517  212266 pod_ready.go:103] pod "coredns-7c65d6cfc9-cv7dr" in "kube-system" namespace has status "Ready":"False"
	I0920 17:49:50.869883  212266 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.825007866s)
	I0920 17:49:50.930062  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:51.164568  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:51.429822  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:51.665410  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:51.929932  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:52.164383  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:52.230139  212266 pod_ready.go:93] pod "coredns-7c65d6cfc9-cv7dr" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:52.230164  212266 pod_ready.go:82] duration metric: took 6.00629094s for pod "coredns-7c65d6cfc9-cv7dr" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:52.230175  212266 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fztcv" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:52.430382  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:52.663320  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:52.830709  212266 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 17:49:52.830876  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3475420155 /var/lib/minikube/google_application_credentials.json
	I0920 17:49:52.849781  212266 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 17:49:52.849949  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3777582606 /var/lib/minikube/google_cloud_project
	I0920 17:49:52.873296  212266 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0920 17:49:52.873415  212266 host.go:66] Checking if "minikube" exists ...
	I0920 17:49:52.874194  212266 kubeconfig.go:125] found "minikube" server: "https://10.138.0.48:8443"
	I0920 17:49:52.874224  212266 api_server.go:166] Checking apiserver status ...
	I0920 17:49:52.874277  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:52.890641  212266 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/213584/cgroup
	I0920 17:49:52.899966  212266 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5"
	I0920 17:49:52.900034  212266 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/poda30c9a7effa6f4f8172b7ac23690210b/2352ef03642d1cd07fb0e376d004711d4ef3e0406ca0648ee6b97a2dd77feea5/freezer.state
	I0920 17:49:52.908346  212266 api_server.go:204] freezer state: "THAWED"
	I0920 17:49:52.908373  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:52.912293  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
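	
	The freezer check above locates the apiserver's cgroup through /proc/<pid>/cgroup and reads its freezer.state, expecting THAWED before the healthz result is trusted. A sketch for the cgroup v1 layout shown in the log:
	
		// Illustrative cgroup v1 freezer check: find the freezer entry in
		// /proc/<pid>/cgroup, read its freezer.state, and expect "THAWED".
		package sketch
	
		import (
			"fmt"
			"os"
			"strings"
		)
	
		func freezerState(pid int) (string, error) {
			data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
			if err != nil {
				return "", err
			}
			for _, line := range strings.Split(string(data), "\n") {
				// cgroup v1 entries look like "8:freezer:/kubepods/burstable/...".
				parts := strings.SplitN(line, ":", 3)
				if len(parts) == 3 && parts[1] == "freezer" {
					state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
					if err != nil {
						return "", err
					}
					return strings.TrimSpace(string(state)), nil
				}
			}
			return "", fmt.Errorf("no freezer entry for pid %d", pid)
		}
	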
	I0920 17:49:52.912346  212266 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 17:49:52.929157  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:52.950706  212266 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:49:53.080549  212266 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 17:49:53.156298  212266 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 17:49:53.156362  212266 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 17:49:53.156562  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2636899175 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 17:49:53.163917  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:53.166727  212266 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 17:49:53.166795  212266 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 17:49:53.166940  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1564914885 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 17:49:53.177424  212266 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:49:53.177452  212266 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 17:49:53.177642  212266 exec_runner.go:51] Run: sudo cp -a /tmp/minikube291962777 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:49:53.188155  212266 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:49:53.429636  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:53.554563  212266 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0920 17:49:53.555940  212266 out.go:177] * Verifying gcp-auth addon...
	I0920 17:49:53.557926  212266 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 17:49:53.560423  212266 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:49:53.663386  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:53.735296  212266 pod_ready.go:98] pod "coredns-7c65d6cfc9-fztcv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:53 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:45 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:45 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:45 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:45 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.4 PodIPs:[{IP:10.244.0.4}] StartTime:2024-09-20 17:49:45 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 17:49:46 +0000 UTC,FinishedAt:2024-09-20 17:49:52 +0000 UTC,ContainerID:docker://a5ae3157fc91b078833d9229d754deae49974d3c781e4c9a2e0364b34002209c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://a5ae3157fc91b078833d9229d754deae49974d3c781e4c9a2e0364b34002209c Started:0xc0001336c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00048a410} {Name:kube-api-access-hsbmf MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00048a430}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 17:49:53.735322  212266 pod_ready.go:82] duration metric: took 1.505141283s for pod "coredns-7c65d6cfc9-fztcv" in "kube-system" namespace to be "Ready" ...
	E0920 17:49:53.735332  212266 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-fztcv" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:53 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:45 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:45 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:45 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:49:45 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.48 HostIPs:[{IP:10.138.0.48}] PodIP:10.244.0.4 PodIPs:[{IP:10.244.0.4}] StartTime:2024-09-20 17:49:45 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 17:49:46 +0000 UTC,FinishedAt:2024-09-20 17:49:52 +0000 UTC,ContainerID:docker://a5ae3157fc91b078833d9229d754deae49974d3c781e4c9a2e0364b34002209c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://a5ae3157fc91b078833d9229d754deae49974d3c781e4c9a2e0364b34002209c Started:0xc0001336c0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00048a410} {Name:kube-api-access-hsbmf MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00048a430}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 17:49:53.735348  212266 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:53.739247  212266 pod_ready.go:93] pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:53.739265  212266 pod_ready.go:82] duration metric: took 3.907195ms for pod "etcd-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:53.739274  212266 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:53.742807  212266 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:53.742823  212266 pod_ready.go:82] duration metric: took 3.542394ms for pod "kube-apiserver-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:53.742832  212266 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:53.746414  212266 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:53.746431  212266 pod_ready.go:82] duration metric: took 3.593188ms for pod "kube-controller-manager-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:53.746440  212266 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2g6ds" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:53.828201  212266 pod_ready.go:93] pod "kube-proxy-2g6ds" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:53.828224  212266 pod_ready.go:82] duration metric: took 81.778286ms for pod "kube-proxy-2g6ds" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:53.828234  212266 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:53.929065  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:54.162951  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:54.227385  212266 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:54.227408  212266 pod_ready.go:82] duration metric: took 399.167366ms for pod "kube-scheduler-ubuntu-20-agent-2" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:54.227417  212266 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-d5hqz" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:54.429076  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:54.627779  212266 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-d5hqz" in "kube-system" namespace has status "Ready":"True"
	I0920 17:49:54.627802  212266 pod_ready.go:82] duration metric: took 400.377456ms for pod "nvidia-device-plugin-daemonset-d5hqz" in "kube-system" namespace to be "Ready" ...
	I0920 17:49:54.627809  212266 pod_ready.go:39] duration metric: took 8.414656307s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:49:54.627828  212266 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:49:54.627878  212266 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:49:54.644788  212266 api_server.go:72] duration metric: took 8.964015268s to wait for apiserver process to appear ...
	I0920 17:49:54.644818  212266 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:49:54.644843  212266 api_server.go:253] Checking apiserver healthz at https://10.138.0.48:8443/healthz ...
	I0920 17:49:54.649453  212266 api_server.go:279] https://10.138.0.48:8443/healthz returned 200:
	ok
	I0920 17:49:54.650474  212266 api_server.go:141] control plane version: v1.31.1
	I0920 17:49:54.650500  212266 api_server.go:131] duration metric: took 5.676415ms to wait for apiserver health ...
	I0920 17:49:54.650508  212266 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:49:54.663554  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:54.833797  212266 system_pods.go:59] 16 kube-system pods found
	I0920 17:49:54.833832  212266 system_pods.go:61] "coredns-7c65d6cfc9-cv7dr" [3822af20-9386-4df6-b6e8-b2685b5dd1d3] Running
	I0920 17:49:54.833843  212266 system_pods.go:61] "csi-hostpath-attacher-0" [59e906a5-ae2c-4f8d-b6bc-b27dcf8c9905] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:49:54.833853  212266 system_pods.go:61] "csi-hostpath-resizer-0" [aad2e387-bae1-4647-afb4-91c754724ce1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:49:54.833866  212266 system_pods.go:61] "csi-hostpathplugin-dh4kg" [367a1756-2147-42e5-9aac-6ac3b95a1704] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:49:54.833882  212266 system_pods.go:61] "etcd-ubuntu-20-agent-2" [73b08eb7-0149-40d0-9395-bd6973010471] Running
	I0920 17:49:54.833891  212266 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-2" [5f5aec6f-d8f0-42d1-b641-f64c919ab7f3] Running
	I0920 17:49:54.833898  212266 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-2" [4981020f-84a6-4562-a9fa-d8be1a4221c8] Running
	I0920 17:49:54.833906  212266 system_pods.go:61] "kube-proxy-2g6ds" [8ea9bc46-ffb0-48bd-b675-83c780a9b2ba] Running
	I0920 17:49:54.833911  212266 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-2" [5cd99d75-d7fc-41f2-b440-976e77d82cf7] Running
	I0920 17:49:54.833922  212266 system_pods.go:61] "metrics-server-84c5f94fbc-cgprq" [c013bfda-f03e-47eb-8381-87d1b5bca7c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:49:54.833930  212266 system_pods.go:61] "nvidia-device-plugin-daemonset-d5hqz" [bab83d49-0650-4d4c-a8df-8f82d3763b25] Running
	I0920 17:49:54.833938  212266 system_pods.go:61] "registry-66c9cd494c-8wpxs" [52c18ffd-2b22-48ba-9662-b376d4deaec2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 17:49:54.833952  212266 system_pods.go:61] "registry-proxy-jdldd" [37c4212a-8b2f-468a-b4d8-ad804d98aea8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:49:54.833964  212266 system_pods.go:61] "snapshot-controller-56fcc65765-fgh5w" [bfbdca52-c550-415e-8ac4-785fccd34f39] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:49:54.833977  212266 system_pods.go:61] "snapshot-controller-56fcc65765-pn44r" [5ff3f32e-8052-4010-ae03-68dea10ae8ad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:49:54.833986  212266 system_pods.go:61] "storage-provisioner" [db90d410-c800-4c76-9585-f10c4806e3f4] Running
	I0920 17:49:54.833997  212266 system_pods.go:74] duration metric: took 183.481911ms to wait for pod list to return data ...
	I0920 17:49:54.834009  212266 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:49:54.929899  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:55.028627  212266 default_sa.go:45] found service account: "default"
	I0920 17:49:55.028658  212266 default_sa.go:55] duration metric: took 194.638967ms for default service account to be created ...
	I0920 17:49:55.028671  212266 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:49:55.163772  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:55.233735  212266 system_pods.go:86] 16 kube-system pods found
	I0920 17:49:55.233769  212266 system_pods.go:89] "coredns-7c65d6cfc9-cv7dr" [3822af20-9386-4df6-b6e8-b2685b5dd1d3] Running
	I0920 17:49:55.233783  212266 system_pods.go:89] "csi-hostpath-attacher-0" [59e906a5-ae2c-4f8d-b6bc-b27dcf8c9905] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:49:55.233793  212266 system_pods.go:89] "csi-hostpath-resizer-0" [aad2e387-bae1-4647-afb4-91c754724ce1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:49:55.233805  212266 system_pods.go:89] "csi-hostpathplugin-dh4kg" [367a1756-2147-42e5-9aac-6ac3b95a1704] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:49:55.233812  212266 system_pods.go:89] "etcd-ubuntu-20-agent-2" [73b08eb7-0149-40d0-9395-bd6973010471] Running
	I0920 17:49:55.233824  212266 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-2" [5f5aec6f-d8f0-42d1-b641-f64c919ab7f3] Running
	I0920 17:49:55.233830  212266 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-2" [4981020f-84a6-4562-a9fa-d8be1a4221c8] Running
	I0920 17:49:55.233841  212266 system_pods.go:89] "kube-proxy-2g6ds" [8ea9bc46-ffb0-48bd-b675-83c780a9b2ba] Running
	I0920 17:49:55.233849  212266 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-2" [5cd99d75-d7fc-41f2-b440-976e77d82cf7] Running
	I0920 17:49:55.233859  212266 system_pods.go:89] "metrics-server-84c5f94fbc-cgprq" [c013bfda-f03e-47eb-8381-87d1b5bca7c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 17:49:55.233868  212266 system_pods.go:89] "nvidia-device-plugin-daemonset-d5hqz" [bab83d49-0650-4d4c-a8df-8f82d3763b25] Running
	I0920 17:49:55.233879  212266 system_pods.go:89] "registry-66c9cd494c-8wpxs" [52c18ffd-2b22-48ba-9662-b376d4deaec2] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 17:49:55.233896  212266 system_pods.go:89] "registry-proxy-jdldd" [37c4212a-8b2f-468a-b4d8-ad804d98aea8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 17:49:55.233907  212266 system_pods.go:89] "snapshot-controller-56fcc65765-fgh5w" [bfbdca52-c550-415e-8ac4-785fccd34f39] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:49:55.233919  212266 system_pods.go:89] "snapshot-controller-56fcc65765-pn44r" [5ff3f32e-8052-4010-ae03-68dea10ae8ad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:49:55.233928  212266 system_pods.go:89] "storage-provisioner" [db90d410-c800-4c76-9585-f10c4806e3f4] Running
	I0920 17:49:55.233936  212266 system_pods.go:126] duration metric: took 205.258516ms to wait for k8s-apps to be running ...
	I0920 17:49:55.233945  212266 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:49:55.233994  212266 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:49:55.247416  212266 system_svc.go:56] duration metric: took 13.464169ms WaitForService to wait for kubelet
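	
	The preceding service check leans on systemctl is-active --quiet, which exits 0 only for an active unit. An equivalent one-liner in Go (the helper name is illustrative; sudo mirrors the log):
	
		// Illustrative kubelet liveness check: a nil error from Run() means the
		// unit is active, because is-active --quiet exits non-zero otherwise.
		package sketch
	
		import "os/exec"
	
		func kubeletActive() bool {
			return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
		}
	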
	I0920 17:49:55.247442  212266 kubeadm.go:582] duration metric: took 9.566678927s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:49:55.247461  212266 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:49:55.429115  212266 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0920 17:49:55.429143  212266 node_conditions.go:123] node cpu capacity is 8
	I0920 17:49:55.429156  212266 node_conditions.go:105] duration metric: took 181.69066ms to run NodePressure ...
	I0920 17:49:55.429170  212266 start.go:241] waiting for startup goroutines ...
	I0920 17:49:55.429222  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:55.684621  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:55.928865  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:56.163369  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:56.429600  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:56.663746  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:56.929653  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:57.163413  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:57.429800  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:57.662864  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:57.929743  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:58.163770  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:58.430366  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:58.664476  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:58.929935  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:59.163824  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:59.429264  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:49:59.663606  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:49:59.929484  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:50:00.164120  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:00.429777  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:50:00.675390  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:00.929389  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:50:01.163420  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:01.429975  212266 kapi.go:107] duration metric: took 14.50431961s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 17:50:01.663799  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:02.163542  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:02.663684  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:03.163536  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:03.663862  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:04.163489  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:04.698827  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:05.163424  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:05.663653  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:06.164251  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:06.664075  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:07.163325  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:07.662786  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:08.163383  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:08.663003  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:09.164332  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:09.663198  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:10.207610  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:10.662907  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:11.163602  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:11.664117  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:12.163732  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:12.663476  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:13.167363  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:13.663153  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:14.164063  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:14.664493  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:15.162591  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:15.663880  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:16.163865  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:16.663077  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:17.163715  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:17.663578  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:18.164072  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:18.664013  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:19.164472  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:19.663837  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:20.164062  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:20.663699  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:21.163995  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:21.663682  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:22.164005  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:22.663660  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:23.164484  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:23.662784  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:24.163867  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:50:24.663333  212266 kapi.go:107] duration metric: took 35.504462322s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 17:50:35.062138  212266 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:50:35.062161  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:35.561659  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:36.060879  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:36.561230  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:37.061954  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:37.560757  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:38.060948  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:38.561323  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:50:39.061525  212266 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same kapi.go:96 "waiting for pod" line repeats every ~500ms, state still Pending, through 17:51:09.061 ...]
	I0920 17:51:09.561251  212266 kapi.go:107] duration metric: took 1m16.003319709s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 17:51:09.562826  212266 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0920 17:51:09.564493  212266 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 17:51:09.565949  212266 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 17:51:09.567407  212266 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0920 17:51:09.568705  212266 addons.go:510] duration metric: took 1m23.895392911s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0920 17:51:09.568751  212266 start.go:246] waiting for cluster config update ...
	I0920 17:51:09.568775  212266 start.go:255] writing updated cluster config ...
	I0920 17:51:09.569088  212266 exec_runner.go:51] Run: rm -f paused
	I0920 17:51:09.633664  212266 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:51:09.635491  212266 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
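
	For reference, the starred gcp-auth messages above describe the webhook's opt-out mechanism: credentials are mounted into every new pod unless the pod carries the gcp-auth-skip-secret label. A minimal sketch of opting a pod out (the value "true" is an assumption, since the message only requires the key, and the pod name is illustrative):

	# create a pod the gcp-auth webhook should leave alone
	kubectl --context minikube run no-creds --image=busybox --restart=Never \
	  --labels="gcp-auth-skip-secret=true" -- sleep 3600
	# list the pod's volumes; no credential volume should have been injected
	kubectl --context minikube get pod no-creds -o jsonpath='{.spec.volumes[*].name}'

	To mount credentials into pods that existed before the addon came up, the message suggests rerunning the enable step with --refresh:

	out/minikube-linux-amd64 -p minikube addons enable gcp-auth --refresh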
	
	
	==> Docker <==
	-- Logs begin at Fri 2024-08-09 19:32:18 UTC, end at Fri 2024-09-20 18:01:01 UTC. --
	Sep 20 17:52:09 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T17:52:09.633297610Z" level=info msg="ignoring event" container=0b03c7ad1563624d77940b1e65c1f6e3f203160b6d6da7285ded040df3ba1b32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:52:25 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T17:52:25.206887916Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=4fad965fa7ec14f6 traceID=17df5502a9e9f02bc4662c77884768b3
	Sep 20 17:52:25 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T17:52:25.208977511Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=4fad965fa7ec14f6 traceID=17df5502a9e9f02bc4662c77884768b3
	Sep 20 17:53:08 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T17:53:08.268603915Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=0c1c0db808051a68 traceID=93036c1c8318323622759fdb3cd5b14d
	Sep 20 17:53:08 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T17:53:08.270906157Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=0c1c0db808051a68 traceID=93036c1c8318323622759fdb3cd5b14d
	Sep 20 17:53:22 ubuntu-20-agent-2 cri-dockerd[212827]: time="2024-09-20T17:53:22Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 20 17:53:23 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T17:53:23.800558587Z" level=info msg="ignoring event" container=eac9757f111d0422105f7b8a93be853d8948fb9e9339d6a47736106fada01906 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:54:39 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T17:54:39.223775381Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=e63f0f30b9d375e1 traceID=dce72bf3607e54172555631c253d5465
	Sep 20 17:54:39 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T17:54:39.226069176Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=e63f0f30b9d375e1 traceID=dce72bf3607e54172555631c253d5465
	Sep 20 17:56:12 ubuntu-20-agent-2 cri-dockerd[212827]: time="2024-09-20T17:56:12Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 20 17:56:13 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T17:56:13.769186021Z" level=info msg="ignoring event" container=16c3e119a3ac56eed26d6af8a9ce3917789044f7541583d40f05a77131e2e69f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 17:57:24 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T17:57:24.223166637Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=35c7010c67600c64 traceID=c0923871f9abc10bf41ff0faf7f62f52
	Sep 20 17:57:24 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T17:57:24.225474308Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=35c7010c67600c64 traceID=c0923871f9abc10bf41ff0faf7f62f52
	Sep 20 18:00:01 ubuntu-20-agent-2 cri-dockerd[212827]: time="2024-09-20T18:00:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4e88711907ba6b0ba7a3a8fa350502d53ddfbe90b0a86f0405cfb23515f732dd/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-west1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 20 18:00:01 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T18:00:01.675414938Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=69887a26a65f847a traceID=ba3e89f90f86e0f29ecc1a7e1b59c71b
	Sep 20 18:00:01 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T18:00:01.677716507Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=69887a26a65f847a traceID=ba3e89f90f86e0f29ecc1a7e1b59c71b
	Sep 20 18:00:13 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T18:00:13.221336955Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=3ebd687f9b812f67 traceID=ccbda0bf32e64fde69344ef84b96118a
	Sep 20 18:00:13 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T18:00:13.223489654Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=3ebd687f9b812f67 traceID=ccbda0bf32e64fde69344ef84b96118a
	Sep 20 18:00:36 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T18:00:36.217089985Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=d5ff422aa7b8f950 traceID=9db30171c0b74aa1b845a7e94225d5bc
	Sep 20 18:00:36 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T18:00:36.219263693Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=d5ff422aa7b8f950 traceID=9db30171c0b74aa1b845a7e94225d5bc
	Sep 20 18:01:01 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T18:01:01.142187053Z" level=info msg="ignoring event" container=4e88711907ba6b0ba7a3a8fa350502d53ddfbe90b0a86f0405cfb23515f732dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:01:01 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T18:01:01.449242243Z" level=info msg="ignoring event" container=a5df325f347812f8cfb0ac496216b90c84811f77614b99b61ed3765e97fc1beb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:01:01 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T18:01:01.519371180Z" level=info msg="ignoring event" container=ecfa6270a407b456252ec64a56e8b392b9017dbc28bbb0132639c39ec62901f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:01:01 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T18:01:01.610856847Z" level=info msg="ignoring event" container=6bd758c5c544352b83d4cb6a67316265c5b4d31ef5630ffe2b49758b29eb7621 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:01:01 ubuntu-20-agent-2 dockerd[212498]: time="2024-09-20T18:01:01.693546177Z" level=info msg="ignoring event" container=42205eed17a6b33216f4b0923aed8b042067ae651911d64e2bc83556abbe307b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
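
	The repeated "unauthorized: authentication failed" pulls of gcr.io/k8s-minikube/busybox (both :1.28.4-glibc and :latest, from 17:52 through 18:00) line up with the TestAddons/parallel/Registry failure: the registry-test pod's image never pulled, so its wget probe could not run and kubectl run timed out. The image is public, so one plausible culprit is stale or injected gcr.io credentials (for example, the gcp-auth addon's) turning an anonymous pull into a failing authenticated one. A diagnostic sketch run on the host, not part of the captured log:

	# retry the exact pull the kubelet attempted
	docker pull gcr.io/k8s-minikube/busybox:latest
	# drop any cached gcr.io login so the pull goes anonymous, then retry
	docker logout gcr.io
	docker pull gcr.io/k8s-minikube/busybox:latest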
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	16c3e119a3ac5       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   900c945370de2       gadget-xbdpm
	eb38a92ff61ed       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   c9442eaf59aa6       gcp-auth-89d5ffd79-sbsvh
	cfeaf32abe08a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   9f92f4528036c       csi-hostpathplugin-dh4kg
	1d6bfe59e68af       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   9f92f4528036c       csi-hostpathplugin-dh4kg
	ca636e0924b59       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   9f92f4528036c       csi-hostpathplugin-dh4kg
	b3e01b29654a0       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   9f92f4528036c       csi-hostpathplugin-dh4kg
	8f215498b10c8       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   9f92f4528036c       csi-hostpathplugin-dh4kg
	314a939748a8b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   9f92f4528036c       csi-hostpathplugin-dh4kg
	d13deee5631d3       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   b6fa9368ec62d       csi-hostpath-resizer-0
	6710ebd90c3ed       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   b1fc7b953abd0       csi-hostpath-attacher-0
	57ddc802a9fc2       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   c991b0e99df0c       snapshot-controller-56fcc65765-pn44r
	c3e0de92989e8       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   22da85ff33de5       snapshot-controller-56fcc65765-fgh5w
	7ac74bdcfd5a6       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   d77a7c2c365a0       local-path-provisioner-86d989889c-8wh7c
	b9521982deb59       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   c56afe8500d78       yakd-dashboard-67d98fc6b-dv4nk
	11bf97fc45adf       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   4210ce47510b5       metrics-server-84c5f94fbc-cgprq
	ecfa6270a407b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              11 minutes ago      Exited              registry-proxy                           0                   42205eed17a6b       registry-proxy-jdldd
	a5df325f34781       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Exited              registry                                 0                   6bd758c5c5443       registry-66c9cd494c-8wpxs
	2f214b8458ec5       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               11 minutes ago      Running             cloud-spanner-emulator                   0                   991041e249c49       cloud-spanner-emulator-769b77f747-8xvn4
	f9835b9149bc8       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   b395ec01e00c4       nvidia-device-plugin-daemonset-d5hqz
	447ec91f65de7       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   d497fa7ae96a7       storage-provisioner
	1116152da2a87       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   30c6f89a9da73       coredns-7c65d6cfc9-cv7dr
	c1fd0bf0148a7       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   25f8c8f3c1389       kube-proxy-2g6ds
	f3e925bf08ccd       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   3176dec4275ab       etcd-ubuntu-20-agent-2
	2352ef03642d1       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   13a77b021944b       kube-apiserver-ubuntu-20-agent-2
	5d8aa5fc85837       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   bf0250b7ae2cc       kube-scheduler-ubuntu-20-agent-2
	3a5906ac5736f       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   6f0dd1c6fab8d       kube-controller-manager-ubuntu-20-agent-2
	
	
	==> coredns [1116152da2a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:51197 - 61291 "HINFO IN 425421406788752099.7349578684880644430. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.026257812s
	[INFO] 10.244.0.23:36096 - 57800 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000301932s
	[INFO] 10.244.0.23:59039 - 28496 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000422119s
	[INFO] 10.244.0.23:57957 - 9888 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124747s
	[INFO] 10.244.0.23:55302 - 33807 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000129499s
	[INFO] 10.244.0.23:40510 - 3510 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101846s
	[INFO] 10.244.0.23:50109 - 52664 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164847s
	[INFO] 10.244.0.23:35236 - 774 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00224583s
	[INFO] 10.244.0.23:42225 - 1706 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.003256911s
	[INFO] 10.244.0.23:56144 - 7681 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.001846939s
	[INFO] 10.244.0.23:42127 - 15707 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.002360967s
	[INFO] 10.244.0.23:47767 - 57675 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002014735s
	[INFO] 10.244.0.23:48837 - 48190 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003458364s
	[INFO] 10.244.0.23:38426 - 20356 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001179609s
	[INFO] 10.244.0.23:36233 - 4977 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001617058s
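
	The storage.googleapis.com queries above trace the pod's resolv.conf search path with ndots:5 (visible in the Docker log entry at 18:00:01): the name is expanded against each search domain first, returning NXDOMAIN each time, and only the final bare-name query answers NOERROR. The same expansion can be reproduced from inside the cluster (pod name illustrative; the image pull may hit the auth issue noted under the Docker log):

	kubectl --context minikube run -it --rm dns-test --image=busybox --restart=Never \
	  -- nslookup storage.googleapis.com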
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-2
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-2
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_49_40_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-2
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-2"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:49:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-2
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:00:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 17:56:49 +0000   Fri, 20 Sep 2024 17:49:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 17:56:49 +0000   Fri, 20 Sep 2024 17:49:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 17:56:49 +0000   Fri, 20 Sep 2024 17:49:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 17:56:49 +0000   Fri, 20 Sep 2024 17:49:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.138.0.48
	  Hostname:    ubuntu-20-agent-2
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                1ec29a5c-5f40-e854-ccac-68a60c2524db
	  Boot ID:                    0fd695e7-50c5-4838-9acc-b2d1cdaf04a4
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-769b77f747-8xvn4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-xbdpm                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-sbsvh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-cv7dr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-dh4kg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-2                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-2             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-2    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-2g6ds                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-2             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-cgprq              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-d5hqz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-fgh5w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-pn44r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-8wh7c      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-dv4nk               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                kubelet          Node ubuntu-20-agent-2 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ubuntu-20-agent-2 event: Registered Node ubuntu-20-agent-2 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea 14 b1 ac b7 0d 08 06
	[  +1.002929] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3a 38 df d8 bf fc 08 06
	[  +0.031068] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 22 0e 75 17 ca 02 08 06
	[  +2.574059] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 4a 0d aa 64 98 08 06
	[  +1.642066] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff fa 23 90 1f dc 95 08 06
	[  +1.999033] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a e0 6d d5 4f 0f 08 06
	[  +4.532440] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 cc 27 ef 5a 8a 08 06
	[  +0.289418] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2a fc d3 7c dc 34 08 06
	[  +0.074145] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 ae 5e 4f 83 7c 08 06
	[ +36.701044] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 0a 8d 8f 7b ad 22 08 06
	[  +0.030815] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 26 6e 38 9f 1e 7e 08 06
	[Sep20 17:51] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa 88 b7 cf a4 b1 08 06
	[  +0.000524] IPv4: martian source 10.244.0.23 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff da 60 f4 1a 64 5a 08 06
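
	The "martian source" entries are the kernel's log_martians output: packets with pod-range sources (10.244.0.0/24) seen on the host's eth0, which is common, typically benign noise on none-driver hosts where pod traffic crosses the host's interfaces. The sysctls governing this logging can be read with (a read-only check):

	sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter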
	
	
	==> etcd [f3e925bf08cc] <==
	{"level":"info","ts":"2024-09-20T17:49:36.467236Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-20T17:49:36.467247Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"10.138.0.48:2380"}
	{"level":"info","ts":"2024-09-20T17:49:36.556852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T17:49:36.556910Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T17:49:36.556939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgPreVoteResp from 6b435b960bec7c3c at term 1"}
	{"level":"info","ts":"2024-09-20T17:49:36.556956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T17:49:36.556965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c received MsgVoteResp from 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-20T17:49:36.556976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b435b960bec7c3c became leader at term 2"}
	{"level":"info","ts":"2024-09-20T17:49:36.556990Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b435b960bec7c3c elected leader 6b435b960bec7c3c at term 2"}
	{"level":"info","ts":"2024-09-20T17:49:36.557795Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:49:36.558412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:49:36.558409Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b435b960bec7c3c","local-member-attributes":"{Name:ubuntu-20-agent-2 ClientURLs:[https://10.138.0.48:2379]}","request-path":"/0/members/6b435b960bec7c3c/attributes","cluster-id":"548dac8640a5bdf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T17:49:36.558442Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:49:36.558598Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T17:49:36.558633Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T17:49:36.558804Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"548dac8640a5bdf4","local-member-id":"6b435b960bec7c3c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:49:36.558891Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:49:36.558924Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:49:36.559700Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:49:36.559764Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:49:36.561023Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.138.0.48:2379"}
	{"level":"info","ts":"2024-09-20T17:49:36.561030Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T17:59:36.786096Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1725}
	{"level":"info","ts":"2024-09-20T17:59:36.809982Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1725,"took":"23.304006ms","hash":1670425455,"current-db-size-bytes":8409088,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4366336,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-09-20T17:59:36.810033Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1670425455,"revision":1725,"compact-revision":-1}
	
	
	==> gcp-auth [eb38a92ff61e] <==
	2024/09/20 17:51:09 GCP Auth Webhook started!
	2024/09/20 17:51:25 Ready to marshal response ...
	2024/09/20 17:51:25 Ready to write response ...
	2024/09/20 17:51:26 Ready to marshal response ...
	2024/09/20 17:51:26 Ready to write response ...
	2024/09/20 17:51:49 Ready to marshal response ...
	2024/09/20 17:51:49 Ready to write response ...
	2024/09/20 17:51:49 Ready to marshal response ...
	2024/09/20 17:51:49 Ready to write response ...
	2024/09/20 17:51:49 Ready to marshal response ...
	2024/09/20 17:51:49 Ready to write response ...
	2024/09/20 18:00:01 Ready to marshal response ...
	2024/09/20 18:00:01 Ready to write response ...
	
	
	==> kernel <==
	 18:01:02 up  1:43,  0 users,  load average: 0.08, 0.29, 0.58
	Linux ubuntu-20-agent-2 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [2352ef03642d] <==
	W0920 17:50:28.016120       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.209.79:443: connect: connection refused
	W0920 17:50:34.567020       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.249.169:443: connect: connection refused
	E0920 17:50:34.567067       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.249.169:443: connect: connection refused" logger="UnhandledError"
	W0920 17:50:56.580625       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.249.169:443: connect: connection refused
	E0920 17:50:56.580667       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.249.169:443: connect: connection refused" logger="UnhandledError"
	W0920 17:50:56.586481       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.249.169:443: connect: connection refused
	E0920 17:50:56.586514       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.249.169:443: connect: connection refused" logger="UnhandledError"
	I0920 17:51:25.884534       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0920 17:51:25.902849       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0920 17:51:39.279863       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0920 17:51:39.292974       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0920 17:51:39.386866       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 17:51:39.410789       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 17:51:39.411132       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0920 17:51:39.457568       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 17:51:39.583467       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 17:51:39.589697       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 17:51:39.613347       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0920 17:51:40.425354       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0920 17:51:40.458470       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 17:51:40.505702       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0920 17:51:40.600205       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 17:51:40.614117       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 17:51:40.678123       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 17:51:40.805310       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
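
	Two webhook failure modes appear side by side above: mutatequeue.volcano.sh "fails closed" (requests are rejected while the webhook is unreachable), whereas gcp-auth-mutate.k8s.io "fails open" (requests proceed). That behavior is each webhook's failurePolicy (Fail vs Ignore). The policies in effect can be listed with (output depends on which addons are still installed):

	kubectl --context minikube get mutatingwebhookconfigurations \
	  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.webhooks[*].failurePolicy}{"\n"}{end}'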
	
	
	==> kube-controller-manager [3a5906ac5736] <==
	W0920 17:59:51.181631       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 17:59:51.181674       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	[... the same W/E reflector pair for *v1.PartialObjectMetadata repeats at 17:59:55, 17:59:57, 17:59:59, 18:00:13, 18:00:14, 18:00:20, 18:00:32, 18:00:33, and 18:00:38 ...]
	W0920 18:00:45.179664       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:00:45.179707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:01:01.398722       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.125µs"
	W0920 18:01:01.854652       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:01:01.854709       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
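
	The recurring PartialObjectMetadata failures are the controller manager's metadata informers (used by the garbage collector, among others) still listing group-versions that no longer exist, most plausibly the volcano.sh APIs whose watchers the apiserver terminated at 17:51:40 above. They are noisy but self-limiting once the informers resync. A quick confirmation that those APIs are gone (a sketch):

	kubectl --context minikube get crd | grep volcano.sh
	kubectl --context minikube api-resources --api-group=batch.volcano.sh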
	
	
	==> kube-proxy [c1fd0bf0148a] <==
	I0920 17:49:45.558232       1 server_linux.go:66] "Using iptables proxy"
	I0920 17:49:45.656167       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.138.0.48"]
	E0920 17:49:45.656234       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:49:45.688042       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 17:49:45.688114       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:49:45.694351       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:49:45.694878       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:49:45.694918       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:49:45.696513       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:49:45.696643       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:49:45.696831       1 config.go:199] "Starting service config controller"
	I0920 17:49:45.696840       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:49:45.697353       1 config.go:328] "Starting node config controller"
	I0920 17:49:45.697370       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:49:45.797168       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:49:45.797182       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 17:49:45.797430       1 shared_informer.go:320] Caches are synced for node config
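
	The startup warning is kube-proxy's own suggestion: with nodePortAddresses unset, NodePort services accept connections on every local IP. Under kubeadm (which the none driver uses) the setting lives in the kube-proxy ConfigMap; a hedged sketch of applying it, using this node's address from the log (exact field syntax per the KubeProxyConfiguration reference for this version):

	kubectl --context minikube -n kube-system edit configmap kube-proxy
	# in config.conf, set for example:
	#   nodePortAddresses: ["10.138.0.48/32"]
	# (newer releases also accept the "primary" shorthand the warning mentions)
	kubectl --context minikube -n kube-system rollout restart daemonset kube-proxy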
	
	
	==> kube-scheduler [5d8aa5fc8583] <==
	W0920 17:49:37.694024       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 17:49:37.694185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:37.694202       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0920 17:49:37.694205       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:49:37.694223       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:37.694232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:49:37.694238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0920 17:49:37.694255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:37.694059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 17:49:37.694283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:37.694119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:49:37.694328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:37.694204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 17:49:37.694360       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:38.623624       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:49:38.623667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:38.693760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 17:49:38.693798       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:38.778392       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:49:38.778449       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 17:49:38.787849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 17:49:38.787904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:49:38.804503       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 17:49:38.804752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 17:49:40.791888       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2024-08-09 19:32:18 UTC, end at Fri 2024-09-20 18:01:02 UTC. --
	Sep 20 18:00:36 ubuntu-20-agent-2 kubelet[213714]: E0920 18:00:36.068658  213714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xbdpm_gadget(6c57d252-f259-4893-a8e3-6d066f13a81c)\"" pod="gadget/gadget-xbdpm" podUID="6c57d252-f259-4893-a8e3-6d066f13a81c"
	Sep 20 18:00:36 ubuntu-20-agent-2 kubelet[213714]: E0920 18:00:36.219775  213714 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:latest"
	Sep 20 18:00:36 ubuntu-20-agent-2 kubelet[213714]: E0920 18:00:36.219974  213714 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-test,Image:gcr.io/k8s-minikube/busybox,Command:[],Args:[sh -c wget --spider -S http://registry.kube-system.svc.cluster.local],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qrvdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:true,StdinOnce:true,TTY:true,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod registry-test_default(91818f30-f633-421f-b834-af1e2de1e5b4): ErrImagePull: Error response from daemon: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" logger="UnhandledError"
	Sep 20 18:00:36 ubuntu-20-agent-2 kubelet[213714]: E0920 18:00:36.221182  213714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ErrImagePull: \"Error response from daemon: Head \\\"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\\\": unauthorized: authentication failed\"" pod="default/registry-test" podUID="91818f30-f633-421f-b834-af1e2de1e5b4"
	Sep 20 18:00:40 ubuntu-20-agent-2 kubelet[213714]: E0920 18:00:40.072299  213714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="1ed6128c-fe5f-48a6-a89b-2811e7057fb0"
	Sep 20 18:00:47 ubuntu-20-agent-2 kubelet[213714]: I0920 18:00:47.067398  213714 scope.go:117] "RemoveContainer" containerID="16c3e119a3ac56eed26d6af8a9ce3917789044f7541583d40f05a77131e2e69f"
	Sep 20 18:00:47 ubuntu-20-agent-2 kubelet[213714]: E0920 18:00:47.067583  213714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xbdpm_gadget(6c57d252-f259-4893-a8e3-6d066f13a81c)\"" pod="gadget/gadget-xbdpm" podUID="6c57d252-f259-4893-a8e3-6d066f13a81c"
	Sep 20 18:00:48 ubuntu-20-agent-2 kubelet[213714]: E0920 18:00:48.070589  213714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="91818f30-f633-421f-b834-af1e2de1e5b4"
	Sep 20 18:00:55 ubuntu-20-agent-2 kubelet[213714]: E0920 18:00:55.070507  213714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="1ed6128c-fe5f-48a6-a89b-2811e7057fb0"
	Sep 20 18:00:59 ubuntu-20-agent-2 kubelet[213714]: E0920 18:00:59.069965  213714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="91818f30-f633-421f-b834-af1e2de1e5b4"
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.068265  213714 scope.go:117] "RemoveContainer" containerID="16c3e119a3ac56eed26d6af8a9ce3917789044f7541583d40f05a77131e2e69f"
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: E0920 18:01:01.068524  213714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-xbdpm_gadget(6c57d252-f259-4893-a8e3-6d066f13a81c)\"" pod="gadget/gadget-xbdpm" podUID="6c57d252-f259-4893-a8e3-6d066f13a81c"
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.354219  213714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrvdb\" (UniqueName: \"kubernetes.io/projected/91818f30-f633-421f-b834-af1e2de1e5b4-kube-api-access-qrvdb\") pod \"91818f30-f633-421f-b834-af1e2de1e5b4\" (UID: \"91818f30-f633-421f-b834-af1e2de1e5b4\") "
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.354289  213714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/91818f30-f633-421f-b834-af1e2de1e5b4-gcp-creds\") pod \"91818f30-f633-421f-b834-af1e2de1e5b4\" (UID: \"91818f30-f633-421f-b834-af1e2de1e5b4\") "
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.354392  213714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91818f30-f633-421f-b834-af1e2de1e5b4-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "91818f30-f633-421f-b834-af1e2de1e5b4" (UID: "91818f30-f633-421f-b834-af1e2de1e5b4"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.356454  213714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91818f30-f633-421f-b834-af1e2de1e5b4-kube-api-access-qrvdb" (OuterVolumeSpecName: "kube-api-access-qrvdb") pod "91818f30-f633-421f-b834-af1e2de1e5b4" (UID: "91818f30-f633-421f-b834-af1e2de1e5b4"). InnerVolumeSpecName "kube-api-access-qrvdb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.454631  213714 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qrvdb\" (UniqueName: \"kubernetes.io/projected/91818f30-f633-421f-b834-af1e2de1e5b4-kube-api-access-qrvdb\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.454668  213714 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/91818f30-f633-421f-b834-af1e2de1e5b4-gcp-creds\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.856972  213714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdgtc\" (UniqueName: \"kubernetes.io/projected/37c4212a-8b2f-468a-b4d8-ad804d98aea8-kube-api-access-sdgtc\") pod \"37c4212a-8b2f-468a-b4d8-ad804d98aea8\" (UID: \"37c4212a-8b2f-468a-b4d8-ad804d98aea8\") "
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.857037  213714 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wj5s\" (UniqueName: \"kubernetes.io/projected/52c18ffd-2b22-48ba-9662-b376d4deaec2-kube-api-access-5wj5s\") pod \"52c18ffd-2b22-48ba-9662-b376d4deaec2\" (UID: \"52c18ffd-2b22-48ba-9662-b376d4deaec2\") "
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.859306  213714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c4212a-8b2f-468a-b4d8-ad804d98aea8-kube-api-access-sdgtc" (OuterVolumeSpecName: "kube-api-access-sdgtc") pod "37c4212a-8b2f-468a-b4d8-ad804d98aea8" (UID: "37c4212a-8b2f-468a-b4d8-ad804d98aea8"). InnerVolumeSpecName "kube-api-access-sdgtc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.859325  213714 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52c18ffd-2b22-48ba-9662-b376d4deaec2-kube-api-access-5wj5s" (OuterVolumeSpecName: "kube-api-access-5wj5s") pod "52c18ffd-2b22-48ba-9662-b376d4deaec2" (UID: "52c18ffd-2b22-48ba-9662-b376d4deaec2"). InnerVolumeSpecName "kube-api-access-5wj5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.958036  213714 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sdgtc\" (UniqueName: \"kubernetes.io/projected/37c4212a-8b2f-468a-b4d8-ad804d98aea8-kube-api-access-sdgtc\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 18:01:01 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:01.958073  213714 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5wj5s\" (UniqueName: \"kubernetes.io/projected/52c18ffd-2b22-48ba-9662-b376d4deaec2-kube-api-access-5wj5s\") on node \"ubuntu-20-agent-2\" DevicePath \"\""
	Sep 20 18:01:02 ubuntu-20-agent-2 kubelet[213714]: I0920 18:01:02.078508  213714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91818f30-f633-421f-b834-af1e2de1e5b4" path="/var/lib/kubelet/pods/91818f30-f633-421f-b834-af1e2de1e5b4/volumes"
	
	
	==> storage-provisioner [447ec91f65de] <==
	I0920 17:49:47.866914       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:49:47.876915       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:49:47.876957       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:49:47.885604       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:49:47.886011       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9b68d87e-dea6-4e5e-9d1b-7ee0b01fcded", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-2_3b9f8ae2-42b3-41f9-aa61-07f068ce6a3e became leader
	I0920 17:49:47.886044       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_3b9f8ae2-42b3-41f9-aa61-07f068ce6a3e!
	I0920 17:49:47.986745       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-2_3b9f8ae2-42b3-41f9-aa61-07f068ce6a3e!
	

-- /stdout --
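The kube-scheduler "forbidden" list/watch warnings in the logs above are the usual start-up race: the informers begin listing before the scheduler's RBAC bindings have propagated, and they stop once the caches sync at 17:49:40. The actionable signal is instead the kubelet's repeated "unauthorized: authentication failed" on pulls from gcr.io. Both points can be spot-checked by hand; the commands below are a diagnostic sketch assuming the minikube profile from this run is still up, not part of the test suite:

	# Confirm the scheduler's RBAC is in place now (expected output: "yes").
	kubectl --context minikube auth can-i list pods --as=system:kube-scheduler

	# Reproduce the pull failure outside Kubernetes (driver=none uses the host docker daemon).
	docker pull gcr.io/k8s-minikube/busybox:latest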
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-2/10.138.0.48
	Start Time:       Fri, 20 Sep 2024 17:51:49 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mlcxs (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-mlcxs:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m13s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-2
	  Normal   Pulling    7m54s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m54s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m54s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m27s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m5s (x21 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.93s)
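Both non-running pods (registry-test and busybox) are blocked by the same cause: pulls of gcr.io/k8s-minikube/busybox from this host fail with "unauthorized: authentication failed", so the probe pod never starts and the wget never runs. A minimal manual reproduction, assuming a live cluster from the same run (the image and service URL below are taken verbatim from the test):

	# Re-run the exact probe the test performs against the in-cluster registry.
	kubectl --context minikube run registry-test --rm --restart=Never -it \
	  --image=gcr.io/k8s-minikube/busybox \
	  -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

	# If it sits in ImagePullBackOff again, the pull error shows up in the pod events.
	kubectl --context minikube describe pod busybox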
Test pass (110/167)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.27
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 0.99
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.55
22 TestOffline 38.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 101.81
29 TestAddons/serial/Volcano 39.42
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.48
36 TestAddons/parallel/MetricsServer 5.38
38 TestAddons/parallel/CSI 52.66
39 TestAddons/parallel/Headlamp 15.9
40 TestAddons/parallel/CloudSpanner 5.26
42 TestAddons/parallel/NvidiaDevicePlugin 5.25
43 TestAddons/parallel/Yakd 10.43
44 TestAddons/StoppedEnableDisable 10.7
46 TestCertExpiration 228.7
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 29.64
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 25.73
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.06
64 TestFunctional/serial/MinikubeKubectlCmd 0.1
65 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
66 TestFunctional/serial/ExtraConfig 33.24
67 TestFunctional/serial/ComponentHealth 0.07
68 TestFunctional/serial/LogsCmd 0.77
69 TestFunctional/serial/LogsFileCmd 0.79
70 TestFunctional/serial/InvalidService 4.36
72 TestFunctional/parallel/ConfigCmd 0.27
73 TestFunctional/parallel/DashboardCmd 6.84
74 TestFunctional/parallel/DryRun 0.15
75 TestFunctional/parallel/InternationalLanguage 0.08
76 TestFunctional/parallel/StatusCmd 0.41
79 TestFunctional/parallel/ProfileCmd/profile_not_create 0.21
80 TestFunctional/parallel/ProfileCmd/profile_list 0.2
81 TestFunctional/parallel/ProfileCmd/profile_json_output 0.2
83 TestFunctional/parallel/ServiceCmd/DeployApp 9.14
84 TestFunctional/parallel/ServiceCmd/List 0.33
85 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
86 TestFunctional/parallel/ServiceCmd/HTTPS 0.15
87 TestFunctional/parallel/ServiceCmd/Format 0.15
88 TestFunctional/parallel/ServiceCmd/URL 0.14
89 TestFunctional/parallel/ServiceCmdConnect 7.31
90 TestFunctional/parallel/AddonsCmd 0.12
91 TestFunctional/parallel/PersistentVolumeClaim 21.59
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.26
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
97 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.18
98 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
99 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
103 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
106 TestFunctional/parallel/MySQL 21.23
110 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
111 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 14.02
112 TestFunctional/parallel/UpdateContextCmd/no_clusters 14.57
115 TestFunctional/parallel/NodeLabels 0.06
119 TestFunctional/parallel/Version/short 0.05
120 TestFunctional/parallel/Version/components 0.22
121 TestFunctional/parallel/License 0.2
122 TestFunctional/delete_echo-server_images 0.03
123 TestFunctional/delete_my-image_image 0.01
124 TestFunctional/delete_minikube_cached_images 0.01
129 TestImageBuild/serial/Setup 13.69
130 TestImageBuild/serial/NormalBuild 1.61
131 TestImageBuild/serial/BuildWithBuildArg 0.8
132 TestImageBuild/serial/BuildWithDockerIgnore 0.58
133 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.59
137 TestJSONOutput/start/Command 27.89
138 TestJSONOutput/start/Audit 0
140 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
141 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
143 TestJSONOutput/pause/Command 0.5
144 TestJSONOutput/pause/Audit 0
146 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/unpause/Command 0.4
150 TestJSONOutput/unpause/Audit 0
152 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/stop/Command 10.41
156 TestJSONOutput/stop/Audit 0
158 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
160 TestErrorJSONOutput 0.2
165 TestMainNoArgs 0.05
166 TestMinikubeProfile 34.1
174 TestPause/serial/Start 27.88
175 TestPause/serial/SecondStartNoReconfiguration 29.53
176 TestPause/serial/Pause 0.47
177 TestPause/serial/VerifyStatus 0.13
178 TestPause/serial/Unpause 0.37
179 TestPause/serial/PauseAgain 0.52
180 TestPause/serial/DeletePaused 1.75
181 TestPause/serial/VerifyDeletedResources 0.07
195 TestRunningBinaryUpgrade 65.64
197 TestStoppedBinaryUpgrade/Setup 0.42
198 TestStoppedBinaryUpgrade/Upgrade 49.34
199 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
200 TestKubernetesUpgrade 305.56

TestDownloadOnly/v1.20.0/json-events (1.27s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.268428082s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.27s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (60.260041ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:48:45
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:48:45.709962  208703 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:48:45.710091  208703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:48:45.710104  208703 out.go:358] Setting ErrFile to fd 2...
	I0920 17:48:45.710110  208703 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:48:45.710368  208703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-201891/.minikube/bin
	W0920 17:48:45.710597  208703 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19678-201891/.minikube/config/config.json: open /home/jenkins/minikube-integration/19678-201891/.minikube/config/config.json: no such file or directory
	I0920 17:48:45.711208  208703 out.go:352] Setting JSON to true
	I0920 17:48:45.712224  208703 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5478,"bootTime":1726849048,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:48:45.712361  208703 start.go:139] virtualization: kvm guest
	I0920 17:48:45.714874  208703 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 17:48:45.714979  208703 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19678-201891/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:48:45.715020  208703 notify.go:220] Checking for updates...
	I0920 17:48:45.716436  208703 out.go:169] MINIKUBE_LOCATION=19678
	I0920 17:48:45.717918  208703 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:48:45.719244  208703 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-201891/kubeconfig
	I0920 17:48:45.720655  208703 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-201891/.minikube
	I0920 17:48:45.721932  208703 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (0.99s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (0.99s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (59.368187ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC | 20 Sep 24 17:48 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 20 Sep 24 17:48 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:48:47
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:48:47.273711  208859 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:48:47.273983  208859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:48:47.273993  208859 out.go:358] Setting ErrFile to fd 2...
	I0920 17:48:47.273998  208859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:48:47.274187  208859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-201891/.minikube/bin
	I0920 17:48:47.274782  208859 out.go:352] Setting JSON to true
	I0920 17:48:47.275675  208859 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5479,"bootTime":1726849048,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:48:47.275791  208859 start.go:139] virtualization: kvm guest
	I0920 17:48:47.277854  208859 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 17:48:47.277986  208859 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19678-201891/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:48:47.278037  208859 notify.go:220] Checking for updates...
	I0920 17:48:47.279390  208859 out.go:169] MINIKUBE_LOCATION=19678
	I0920 17:48:47.280914  208859 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:48:47.282360  208859 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-201891/kubeconfig
	I0920 17:48:47.283649  208859 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-201891/.minikube
	I0920 17:48:47.284939  208859 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
I0920 17:48:48.769292  208691 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:40073 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.55s)

TestOffline (38.42s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (36.856075283s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.559770486s)
--- PASS: TestOffline (38.42s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (48.474955ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (48.701485ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (101.81s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (1m41.809561789s)
--- PASS: TestAddons/Setup (101.81s)

TestAddons/serial/Volcano (39.42s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 8.684462ms
addons_test.go:843: volcano-admission stabilized in 8.754438ms
addons_test.go:835: volcano-scheduler stabilized in 8.818065ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-cc787" [6be78759-b997-468b-b5cd-e8b7d22c33c7] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003968196s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-5wkl2" [65567034-0920-48b5-917e-2217f0a16458] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004345304s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-wdx9v" [dff96084-51c0-4dbf-8fdd-3a695c0cf9bc] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004403485s
addons_test.go:870: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [251cf0b7-008a-462a-a729-ac0d143a0f8d] Pending
helpers_test.go:344: "test-job-nginx-0" [251cf0b7-008a-462a-a729-ac0d143a0f8d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [251cf0b7-008a-462a-a729-ac0d143a0f8d] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004065652s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.110210852s)
--- PASS: TestAddons/serial/Volcano (39.42s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.48s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xbdpm" [6c57d252-f259-4893-a8e3-6d066f13a81c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00392953s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.473477361s)
--- PASS: TestAddons/parallel/InspektorGadget (10.48s)

TestAddons/parallel/MetricsServer (5.38s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.019398ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-cgprq" [c013bfda-f03e-47eb-8381-87d1b5bca7c9] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004128126s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.38s)

TestAddons/parallel/CSI (52.66s)

=== RUN   TestAddons/parallel/CSI
I0920 18:01:18.668893  208691 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 18:01:18.673338  208691 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 18:01:18.673365  208691 kapi.go:107] duration metric: took 4.507114ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.515498ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4a0f9189-3b90-49fe-93a5-f7fbe3ab1d95] Pending
helpers_test.go:344: "task-pv-pod" [4a0f9189-3b90-49fe-93a5-f7fbe3ab1d95] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4a0f9189-3b90-49fe-93a5-f7fbe3ab1d95] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003912248s
addons_test.go:528: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6378b403-88cc-47b6-8f61-a713624b7559] Pending
helpers_test.go:344: "task-pv-pod-restore" [6378b403-88cc-47b6-8f61-a713624b7559] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6378b403-88cc-47b6-8f61-a713624b7559] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003964234s
addons_test.go:570: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.28325127s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.66s)

TestAddons/parallel/Headlamp (15.9s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-6cjdz" [f930eeea-e87f-4bdb-87af-dcc54392f160] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-6cjdz" [f930eeea-e87f-4bdb-87af-dcc54392f160] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003776955s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.411189239s)
--- PASS: TestAddons/parallel/Headlamp (15.90s)

TestAddons/parallel/CloudSpanner (5.26s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-8xvn4" [161f1939-10ef-4ea3-b14d-891e422efb95] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003412969s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.26s)

TestAddons/parallel/NvidiaDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-d5hqz" [bab83d49-0650-4d4c-a8df-8f82d3763b25] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004073995s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

TestAddons/parallel/Yakd (10.43s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-dv4nk" [bbde732e-ed99-463c-b829-a76617c3b0a2] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003554679s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.426673473s)
--- PASS: TestAddons/parallel/Yakd (10.43s)

TestAddons/StoppedEnableDisable (10.7s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.392741774s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.70s)
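What this test establishes is that addon state can be toggled while the cluster is stopped. The same sequence as plain commands (all taken from the log):

	minikube stop -p minikube
	minikube addons enable dashboard -p minikube    # accepted while the cluster is stopped
	minikube addons disable dashboard -p minikube
	minikube addons disable gvisor -p minikube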

TestCertExpiration (228.7s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.042606529s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (33.147675161s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.505334432s)
--- PASS: TestCertExpiration (228.70s)
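--cert-expiration bounds the validity of the certificates minikube generates; the second start above, with 8760h, reissues them. One way to inspect the resulting window; the certificate path is minikube's conventional location, not something this log confirms:

	minikube start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
	# Path below is an assumption (minikube's usual cert directory); adjust for your setup.
	sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt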

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19678-201891/.minikube/files/etc/test/nested/copy/208691/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (29.64s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (29.639691847s)
--- PASS: TestFunctional/serial/StartWithProxy (29.64s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (25.73s)

=== RUN   TestFunctional/serial/SoftStart
I0920 18:07:18.372579  208691 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (25.73165019s)
functional_test.go:663: soft start took 25.732365494s for "minikube" cluster.
I0920 18:07:44.104615  208691 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (25.73s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (33.24s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.241980948s)
functional_test.go:761: restart took 33.242105807s for "minikube" cluster.
I0920 18:08:17.660422  208691 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (33.24s)
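--extra-config=apiserver.<flag>=<value> passes the option through to the kube-apiserver static pod (the harness also auto-adds kubelet.resolv-conf, as the DryRun logs below show). A check that the override landed; the component label is the standard kubeadm one, assumed here rather than taken from this log:

	minikube start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context minikube -n kube-system get pod -l component=kube-apiserver \
	  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep admission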

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
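The phase/status pairs above come from walking the control-plane pods' JSON. A compact jsonpath rendering of the same check (selector and namespace from the log; the formatting is ours):

	kubectl --context minikube get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'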

TestFunctional/serial/LogsCmd (0.77s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.77s)

TestFunctional/serial/LogsFileCmd (0.79s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd2545082278/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.79s)

TestFunctional/serial/InvalidService (4.36s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (155.735972ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://10.138.0.48:30305 |
	|-----------|-------------|-------------|--------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.039670799s)
--- PASS: TestFunctional/serial/InvalidService (4.36s)
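The exit status 115 / SVC_UNREACHABLE path fires when a service has no running backend pod. testdata/invalidsvc.yaml is not reproduced in this log; a hypothetical stand-in that triggers the same failure:

	# Selector deliberately matches no pods, so the service never gets endpoints.
	kubectl --context minikube apply -f - <<-'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: no-such-pod
	  ports:
	  - port: 80
	EOF
	minikube service invalid-svc -p minikube    # exits 115 with SVC_UNREACHABLE, as above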

TestFunctional/parallel/ConfigCmd (0.27s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (42.400613ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (42.24088ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.27s)
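Both Non-zero-exit entries are the expected behaviour: `config get` on an unset key exits with code 14. The full round trip by hand:

	minikube -p minikube config set cpus 2
	minikube -p minikube config get cpus                      # prints 2
	minikube -p minikube config unset cpus
	minikube -p minikube config get cpus || echo "exit $?"    # exit 14: key not in config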

TestFunctional/parallel/DashboardCmd (6.84s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/20 18:08:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 243732: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.84s)

TestFunctional/parallel/DryRun (0.15s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (78.595246ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-201891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-201891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile

-- /stdout --
** stderr ** 
	I0920 18:08:30.791281  244104 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:08:30.791398  244104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:08:30.791407  244104 out.go:358] Setting ErrFile to fd 2...
	I0920 18:08:30.791411  244104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:08:30.791589  244104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-201891/.minikube/bin
	I0920 18:08:30.792122  244104 out.go:352] Setting JSON to false
	I0920 18:08:30.793121  244104 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6663,"bootTime":1726849048,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:08:30.793232  244104 start.go:139] virtualization: kvm guest
	I0920 18:08:30.795352  244104 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:08:30.796568  244104 out.go:177]   - MINIKUBE_LOCATION=19678
	W0920 18:08:30.796602  244104 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19678-201891/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 18:08:30.796648  244104 notify.go:220] Checking for updates...
	I0920 18:08:30.798917  244104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:08:30.800129  244104 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-201891/kubeconfig
	I0920 18:08:30.801401  244104 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-201891/.minikube
	I0920 18:08:30.802568  244104 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:08:30.803847  244104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:08:30.805651  244104 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:08:30.805978  244104 exec_runner.go:51] Run: systemctl --version
	I0920 18:08:30.808516  244104 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:08:30.819905  244104 out.go:177] * Using the none driver based on existing profile
	I0920 18:08:30.821162  244104 start.go:297] selected driver: none
	I0920 18:08:30.821176  244104 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:08:30.821287  244104 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:08:30.821309  244104 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0920 18:08:30.821594  244104 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0920 18:08:30.823844  244104 out.go:201] 
	W0920 18:08:30.825142  244104 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 18:08:30.826257  244104 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.15s)
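--dry-run runs flag validation without provisioning anything, so an undersized --memory fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while a valid invocation exits 0. Reproducing the failing case:

	out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB \
	  --driver=none --bootstrapper=kubeadm
	echo "exit: $?"    # 23; 250MiB is below the 1800MB floor reported above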

TestFunctional/parallel/InternationalLanguage (0.08s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (81.239838ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-201891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-201891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant

-- /stdout --
** stderr ** 
	I0920 18:08:30.948063  244134 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:08:30.948174  244134 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:08:30.948183  244134 out.go:358] Setting ErrFile to fd 2...
	I0920 18:08:30.948186  244134 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:08:30.948466  244134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-201891/.minikube/bin
	I0920 18:08:30.949011  244134 out.go:352] Setting JSON to false
	I0920 18:08:30.949953  244134 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6663,"bootTime":1726849048,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:08:30.950063  244134 start.go:139] virtualization: kvm guest
	I0920 18:08:30.951927  244134 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0920 18:08:30.953250  244134 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19678-201891/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 18:08:30.953271  244134 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:08:30.953352  244134 notify.go:220] Checking for updates...
	I0920 18:08:30.955526  244134 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:08:30.956748  244134 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-201891/kubeconfig
	I0920 18:08:30.958079  244134 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-201891/.minikube
	I0920 18:08:30.959452  244134 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:08:30.960721  244134 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:08:30.962531  244134 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:08:30.962809  244134 exec_runner.go:51] Run: systemctl --version
	I0920 18:08:30.965429  244134 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:08:30.976771  244134 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0920 18:08:30.977824  244134 start.go:297] selected driver: none
	I0920 18:08:30.977834  244134 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.138.0.48 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:08:30.977938  244134 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:08:30.977961  244134 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0920 18:08:30.978246  244134 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0920 18:08:30.980436  244134 out.go:201] 
	W0920 18:08:30.981745  244134 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 18:08:30.983192  244134 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.08s)

TestFunctional/parallel/StatusCmd (0.41s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.41s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.21s)

TestFunctional/parallel/ProfileCmd/profile_list (0.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "153.794007ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "47.516081ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.20s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "149.962229ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.673576ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.20s)
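`--light` skips probing cluster status, which is why it returns in ~45ms against ~150ms for the full listing. The JSON form is convenient for scripting; jq is our addition here, and the valid/invalid grouping is minikube's usual output schema rather than something shown in this log:

	minikube profile list -o json | jq -r '.valid[].Name'
	minikube profile list -o json --light    # fast path: no status probes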

TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-2rc4z" [d2b4c444-b7d0-47ba-8639-95c90ee2c0cf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-2rc4z" [d2b4c444-b7d0-47ba-8639-95c90ee2c0cf] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003412601s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.14s)
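The deploy-and-expose sequence above, with `kubectl wait` standing in for the harness's pod polling (image, port, and label are from the log):

	kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
	kubectl --context minikube wait --for=condition=Ready pod -l app=hello-node --timeout=600s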

TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "326.87959ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.138.0.48:32507
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.15s)

TestFunctional/parallel/ServiceCmd/Format (0.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.15s)

TestFunctional/parallel/ServiceCmd/URL (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.138.0.48:32507
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.14s)

TestFunctional/parallel/ServiceCmdConnect (7.31s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-n59jz" [1aebde49-9c73-41a1-9f0f-ba975c6d5040] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-n59jz" [1aebde49-9c73-41a1-9f0f-ba975c6d5040] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003428944s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.138.0.48:32526
functional_test.go:1675: http://10.138.0.48:32526: success! body:

Hostname: hello-node-connect-67bdd5bbb4-n59jz

Hostname: hello-node-connect-67bdd5bbb4-n59jz

                                                
                                                
Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.138.0.48:8080/

Request Headers:
	accept-encoding=gzip
	host=10.138.0.48:32526
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.31s)
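The body above is echoserver's standard request dump. Fetching it by hand once the deployment is exposed:

	URL=$(minikube -p minikube service hello-node-connect --url)
	curl -s "$URL"    # prints the Hostname / Request Information dump shown above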

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (21.59s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2c96bed0-b849-46ed-bba9-8c4cc6141e52] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003433351s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [30b06fba-412f-4b01-a53f-58e21c0d935f] Pending
helpers_test.go:344: "sp-pod" [30b06fba-412f-4b01-a53f-58e21c0d935f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [30b06fba-412f-4b01-a53f-58e21c0d935f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003374341s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [14205ed2-2a65-4f77-8fcb-c0e0959041c7] Pending
helpers_test.go:344: "sp-pod" [14205ed2-2a65-4f77-8fcb-c0e0959041c7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [14205ed2-2a65-4f77-8fcb-c0e0959041c7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003116114s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.59s)
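The heart of this test is data outliving a pod: write through one pod, delete it, and read the file back from a replacement bound to the same claim. In command form (manifest paths and pod name from the log; `kubectl wait` replaces the harness polling):

	kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
	kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context minikube wait --for=condition=Ready pod/sp-pod --timeout=180s
	kubectl --context minikube exec sp-pod -- ls /tmp/mount    # foo is still there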

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 245860: operation not permitted
helpers_test.go:508: unable to kill pid 245809: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.26s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context minikube apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [33eb591a-a769-43e0-aa07-7fa39bfa9770] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [33eb591a-a769-43e0-aa07-7fa39bfa9770] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004252153s
I0920 18:09:21.732963  208691 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.18s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context minikube get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.32.148 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
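Taken together, the tunnel subtests amount to: run `minikube tunnel` in the background, wait for the LoadBalancer service to be assigned an ingress IP, and hit that IP directly. Condensed (service name from the log; the sleep is a crude stand-in for proper readiness polling):

	minikube -p minikube tunnel --alsologtostderr &
	TUNNEL_PID=$!
	sleep 5    # give the tunnel a moment to program routes
	IP=$(kubectl --context minikube get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -s "http://$IP" >/dev/null && echo "tunnel at http://$IP is working"
	kill "$TUNNEL_PID"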

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p minikube tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MySQL (21.23s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-v6bqx" [d571a45c-d643-44a6-84c6-a1058fb6a19b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-v6bqx" [d571a45c-d643-44a6-84c6-a1058fb6a19b] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.003283496s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-v6bqx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-v6bqx -- mysql -ppassword -e "show databases;": exit status 1 (114.316655ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 18:09:40.211091  208691 retry.go:31] will retry after 1.035068887s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-v6bqx -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-v6bqx -- mysql -ppassword -e "show databases;": exit status 1 (108.820127ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 18:09:41.355324  208691 retry.go:31] will retry after 1.707745822s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-v6bqx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.23s)
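The two ERROR 2002 failures are expected noise: mysqld only starts accepting socket connections some seconds after the pod reports Running, so the harness retries with backoff. A client-side loop that rides out the same window (pod name from the log above):

	for i in $(seq 1 10); do
	  kubectl --context minikube exec mysql-6cdb49bbb-v6bqx -- \
	    mysql -ppassword -e "show databases;" && break
	  sleep 2
	done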

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.02s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.01984793s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.02s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (14.57s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.567784522s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (14.57s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.22s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.22s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestImageBuild/serial/Setup (13.69s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (13.693571169s)
--- PASS: TestImageBuild/serial/Setup (13.69s)

TestImageBuild/serial/NormalBuild (1.61s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (1.60593128s)
--- PASS: TestImageBuild/serial/NormalBuild (1.61s)

TestImageBuild/serial/BuildWithBuildArg (0.8s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.80s)

TestImageBuild/serial/BuildWithDockerIgnore (0.58s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.58s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.59s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.59s)

TestJSONOutput/start/Command (27.89s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (27.892855086s)
--- PASS: TestJSONOutput/start/Command (27.89s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.5s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.50s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.4s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.40s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.41s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.407756811s)
--- PASS: TestJSONOutput/stop/Command (10.41s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.719357ms)

-- stdout --
	{"specversion":"1.0","id":"09aa9361-ef0d-47bf-94f9-44b578a361dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"345d4fdd-31dd-45b6-b50e-24653356bbd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"2058574a-867e-4ffa-adee-822741b933b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"595c5ff0-b6da-43ce-87f0-f5de118008e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-201891/kubeconfig"}}
	{"specversion":"1.0","id":"8909ed70-145d-41ad-bd94-b856b5e1249b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-201891/.minikube"}}
	{"specversion":"1.0","id":"cdbcd05e-ad2d-49ac-9593-e09b94709530","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9b2b4c22-e3ac-4e15-8dd2-dc1ff6d0b9cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"020e2260-bd8f-4e37-992c-bc3e2630927a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.20s)
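
Each line that minikube emits under --output=json is a self-contained CloudEvents-style object, as the stdout capture above shows, so the stream can be consumed line by line. A minimal Go sketch of such a consumer; the field names are taken from the events in this log, so treat the struct as illustrative rather than minikube's canonical schema:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent mirrors the fields visible in the log above (illustrative only).
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Usage (assumed): minikube start --output=json | go run main.go
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			default:
				fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
			}
		}
	}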

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (34.1s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.313610057s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.912194636s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.293549613s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.10s)
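
The profiles above are inspected with profile list -ojson. A minimal Go sketch that shells out to the same command; the profile JSON schema is not shown in this log, so the sketch decodes generically instead of assuming field names:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Assumes a `minikube` binary on PATH.
		out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		// Decode into raw JSON per top-level key; no schema assumptions.
		var v map[string]json.RawMessage
		if err := json.Unmarshal(out, &v); err != nil {
			fmt.Println("parse error:", err)
			return
		}
		for key, raw := range v {
			fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
		}
	}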

TestPause/serial/Start (27.88s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (27.883927277s)
--- PASS: TestPause/serial/Start (27.88s)

TestPause/serial/SecondStartNoReconfiguration (29.53s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (29.525918791s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.53s)

TestPause/serial/Pause (0.47s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.47s)

TestPause/serial/VerifyStatus (0.13s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (128.687997ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.13s)
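
Note that the status command encodes cluster state in its exit code as well as in the JSON (exit status 2 here while paused), so a consumer should parse stdout even when the command reports failure. A minimal Go sketch; the struct fields mirror only the output captured in this log and may not cover everything minikube can emit:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type component struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type node struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	}

	type clusterStatus struct {
		Name          string `json:"Name"`
		StatusCode    int    `json:"StatusCode"`
		StatusName    string `json:"StatusName"`
		BinaryVersion string `json:"BinaryVersion"`
		Nodes         []node `json:"Nodes"`
	}

	func main() {
		// Exit code is deliberately ignored: stdout carries the JSON even on
		// non-zero exit, as the test above demonstrates.
		out, _ := exec.Command("minikube", "status", "-p", "minikube",
			"--output=json", "--layout=cluster").Output()
		var st clusterStatus
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("parse error:", err)
			return
		}
		fmt.Printf("%s: %s (code %d)\n", st.Name, st.StatusName, st.StatusCode)
		for _, n := range st.Nodes {
			for _, c := range n.Components {
				fmt.Printf("  %s/%s: %s\n", n.Name, c.Name, c.StatusName)
			}
		}
	}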

TestPause/serial/Unpause (0.37s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.37s)

TestPause/serial/PauseAgain (0.52s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.52s)

TestPause/serial/DeletePaused (1.75s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.750602228s)
--- PASS: TestPause/serial/DeletePaused (1.75s)

TestPause/serial/VerifyDeletedResources (0.07s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.07s)

TestRunningBinaryUpgrade (65.64s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1825779124 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1825779124 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (27.199809111s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (34.724266458s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.029817453s)
--- PASS: TestRunningBinaryUpgrade (65.64s)

TestStoppedBinaryUpgrade/Setup (0.42s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

TestStoppedBinaryUpgrade/Upgrade (49.34s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2999580684 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2999580684 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.899321515s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2999580684 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2999580684 -p minikube stop: (23.647628475s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (10.790688502s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (49.34s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

TestKubernetesUpgrade (305.56s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (27.655536709s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (1.30373738s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (72.977118ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m16.626273292s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (72.106969ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-201891/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-201891/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.517421823s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.25016813s)
--- PASS: TestKubernetesUpgrade (305.56s)
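
The upgrade path exercised by this test is: start at the old Kubernetes version, stop, restart at the newer version; a subsequent downgrade attempt is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), as the stderr capture above shows. A minimal Go sketch of the same flow via os/exec; it assumes a `minikube` binary on PATH, and error handling is trimmed for brevity:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		return exec.Command("minikube", args...).Run()
	}

	func main() {
		// Upgrade: old cluster -> stop -> start at the newer Kubernetes version.
		_ = run("start", "--kubernetes-version=v1.20.0", "--driver=none", "--bootstrapper=kubeadm")
		_ = run("stop")
		_ = run("start", "--kubernetes-version=v1.31.1", "--driver=none", "--bootstrapper=kubeadm")

		// Downgrade attempt: per the log above, minikube refuses with exit status 106.
		err := run("start", "--kubernetes-version=v1.20.0", "--driver=none", "--bootstrapper=kubeadm")
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 106 {
			fmt.Println("downgrade correctly rejected (K8S_DOWNGRADE_UNSUPPORTED)")
		}
	}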


Test skip (56/167)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
100 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
102 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
104 TestFunctional/parallel/SSHCmd 0
105 TestFunctional/parallel/CpCmd 0
107 TestFunctional/parallel/FileSync 0
108 TestFunctional/parallel/CertSync 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
116 TestFunctional/parallel/ImageCommands 0
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0
125 TestGvisorAddon 0
126 TestMultiControlPlane 0
134 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
161 TestKicCustomNetwork 0
162 TestKicExistingNetwork 0
163 TestKicCustomSubnet 0
164 TestKicStaticIP 0
167 TestMountStart 0
168 TestMultiNode 0
169 TestNetworkPlugins 0
170 TestNoKubernetes 0
171 TestChangeNoneUser 0
182 TestPreload 0
183 TestScheduledStopWindows 0
184 TestScheduledStopUnix 0
185 TestSkaffold 0
188 TestStartStop/group/old-k8s-version 0.13
189 TestStartStop/group/newest-cni 0.13
190 TestStartStop/group/default-k8s-diff-port 0.13
191 TestStartStop/group/no-preload 0.13
192 TestStartStop/group/disable-driver-mounts 0.13
193 TestStartStop/group/embed-certs 0.13
194 TestInsufficientStorage 0
201 TestMissingContainerUpgrade 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.13s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.13s)

TestStartStop/group/newest-cni (0.13s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.13s)

TestStartStop/group/default-k8s-diff-port (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

TestStartStop/group/no-preload (0.13s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.13s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.13s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.13s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)