Test Report: none_Linux 19711

f2dddbc2cec1d99a0bb3d71de73f46a47f499a62:2024-09-27:36389
Failed tests (1/166)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 71.91        |
TestAddons/parallel/Registry (71.91s)

=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.86403ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-5zfg4" [32dd9391-b30e-4231-9d9e-8bd0457919d8] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004008799s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rbxpj" [ae04301c-b1c9-4a19-af2e-04bc0071e797] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004377266s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.082712911s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
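The assertion above expects an `HTTP/1.1 200` response from the in-cluster registry Service. The probe the test runs (`wget --spider -S`) fetches headers only; a rough standalone sketch of that check is below. The `spider` helper name is ours, and the target URL would have to be reachable from wherever it runs, since `registry.kube-system.svc.cluster.local` only resolves inside the cluster:

```python
import urllib.request


def spider(url: str, timeout: float = 5.0) -> int:
    """Headers-only probe, analogous to `wget --spider -S`.

    Returns the HTTP status code; raises URLError on connection
    failure or timeout (the condition the failing test hit).
    """
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.status
```

The `[DEBUG] GET http://10.154.0.4:5000` line later in the log is the same kind of reachability probe, aimed at the node IP instead of the Service DNS name.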
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/27 00:27:00 [DEBUG] GET http://10.154.0.4:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:41241               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:15 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:15 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 27 Sep 24 00:15 UTC | 27 Sep 24 00:17 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 27 Sep 24 00:17 UTC | 27 Sep 24 00:17 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 27 Sep 24 00:27 UTC | 27 Sep 24 00:27 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:15:26
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:15:26.056754  127143 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:15:26.056930  127143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:15:26.056944  127143 out.go:358] Setting ErrFile to fd 2...
	I0927 00:15:26.056949  127143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:15:26.057165  127143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-116460/.minikube/bin
	I0927 00:15:26.057802  127143 out.go:352] Setting JSON to false
	I0927 00:15:26.058645  127143 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7064,"bootTime":1727389062,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:15:26.058747  127143 start.go:139] virtualization: kvm guest
	I0927 00:15:26.060833  127143 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0927 00:15:26.062248  127143 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19711-116460/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 00:15:26.062283  127143 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:15:26.062297  127143 notify.go:220] Checking for updates...
	I0927 00:15:26.064701  127143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:15:26.065968  127143 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-116460/kubeconfig
	I0927 00:15:26.067367  127143 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-116460/.minikube
	I0927 00:15:26.068634  127143 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:15:26.070226  127143 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:15:26.071773  127143 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:15:26.082582  127143 out.go:177] * Using the none driver based on user configuration
	I0927 00:15:26.083719  127143 start.go:297] selected driver: none
	I0927 00:15:26.083738  127143 start.go:901] validating driver "none" against <nil>
	I0927 00:15:26.083764  127143 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:15:26.083827  127143 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0927 00:15:26.084295  127143 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0927 00:15:26.085103  127143 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0927 00:15:26.085467  127143 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:15:26.085514  127143 cni.go:84] Creating CNI manager for ""
	I0927 00:15:26.085589  127143 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:15:26.085607  127143 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0927 00:15:26.085671  127143 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:15:26.086983  127143 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0927 00:15:26.088716  127143 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/config.json ...
	I0927 00:15:26.088767  127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/config.json: {Name:mk699d4bc5cb4218ce6babe138df72e9f0ac852c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:26.088943  127143 start.go:360] acquireMachinesLock for minikube: {Name:mk0c3282f0caac62dc7b9c8c9c6d629924f62b3c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0927 00:15:26.089001  127143 start.go:364] duration metric: took 23.983µs to acquireMachinesLock for "minikube"
	I0927 00:15:26.089021  127143 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 00:15:26.089108  127143 start.go:125] createHost starting for "" (driver="none")
	I0927 00:15:26.090506  127143 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0927 00:15:26.091525  127143 exec_runner.go:51] Run: systemctl --version
	I0927 00:15:26.094078  127143 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0927 00:15:26.094142  127143 client.go:168] LocalClient.Create starting
	I0927 00:15:26.094257  127143 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-116460/.minikube/certs/ca.pem
	I0927 00:15:26.094302  127143 main.go:141] libmachine: Decoding PEM data...
	I0927 00:15:26.094329  127143 main.go:141] libmachine: Parsing certificate...
	I0927 00:15:26.094408  127143 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19711-116460/.minikube/certs/cert.pem
	I0927 00:15:26.094438  127143 main.go:141] libmachine: Decoding PEM data...
	I0927 00:15:26.094452  127143 main.go:141] libmachine: Parsing certificate...
	I0927 00:15:26.094906  127143 client.go:171] duration metric: took 749.826µs to LocalClient.Create
	I0927 00:15:26.094942  127143 start.go:167] duration metric: took 869.44µs to libmachine.API.Create "minikube"
	I0927 00:15:26.094952  127143 start.go:293] postStartSetup for "minikube" (driver="none")
	I0927 00:15:26.095013  127143 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0927 00:15:26.095096  127143 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0927 00:15:26.104789  127143 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0927 00:15:26.104825  127143 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0927 00:15:26.104840  127143 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0927 00:15:26.106628  127143 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0927 00:15:26.107863  127143 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-116460/.minikube/addons for local assets ...
	I0927 00:15:26.107962  127143 filesync.go:126] Scanning /home/jenkins/minikube-integration/19711-116460/.minikube/files for local assets ...
	I0927 00:15:26.107993  127143 start.go:296] duration metric: took 13.034274ms for postStartSetup
	I0927 00:15:26.108949  127143 profile.go:143] Saving config to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/config.json ...
	I0927 00:15:26.109169  127143 start.go:128] duration metric: took 20.04494ms to createHost
	I0927 00:15:26.109188  127143 start.go:83] releasing machines lock for "minikube", held for 20.17718ms
	I0927 00:15:26.109677  127143 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0927 00:15:26.109738  127143 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0927 00:15:26.111918  127143 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0927 00:15:26.111963  127143 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0927 00:15:26.120512  127143 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0927 00:15:26.120570  127143 start.go:495] detecting cgroup driver to use...
	I0927 00:15:26.120604  127143 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 00:15:26.121149  127143 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:15:26.139800  127143 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0927 00:15:26.149822  127143 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0927 00:15:26.159032  127143 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0927 00:15:26.159111  127143 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0927 00:15:26.170218  127143 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 00:15:26.179518  127143 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0927 00:15:26.189860  127143 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0927 00:15:26.200070  127143 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0927 00:15:26.210791  127143 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0927 00:15:26.221057  127143 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0927 00:15:26.232327  127143 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0927 00:15:26.243052  127143 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0927 00:15:26.254136  127143 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0927 00:15:26.263239  127143 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0927 00:15:26.509238  127143 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0927 00:15:26.625304  127143 start.go:495] detecting cgroup driver to use...
	I0927 00:15:26.625374  127143 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0927 00:15:26.625477  127143 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0927 00:15:26.648028  127143 exec_runner.go:51] Run: which cri-dockerd
	I0927 00:15:26.649114  127143 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0927 00:15:26.658441  127143 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0927 00:15:26.658468  127143 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0927 00:15:26.658507  127143 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0927 00:15:26.667029  127143 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0927 00:15:26.667230  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3453152198 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0927 00:15:26.677071  127143 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0927 00:15:26.908822  127143 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0927 00:15:27.139677  127143 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0927 00:15:27.139814  127143 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0927 00:15:27.139830  127143 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0927 00:15:27.139866  127143 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0927 00:15:27.148856  127143 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0927 00:15:27.149107  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1647215183 /etc/docker/daemon.json
	I0927 00:15:27.158297  127143 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0927 00:15:27.363636  127143 exec_runner.go:51] Run: sudo systemctl restart docker
	I0927 00:15:27.774460  127143 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0927 00:15:27.786737  127143 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0927 00:15:27.803946  127143 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 00:15:27.815843  127143 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0927 00:15:28.032042  127143 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0927 00:15:28.241421  127143 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0927 00:15:28.461380  127143 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0927 00:15:28.476952  127143 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0927 00:15:28.488054  127143 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0927 00:15:28.700508  127143 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0927 00:15:28.772855  127143 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0927 00:15:28.772945  127143 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0927 00:15:28.774419  127143 start.go:563] Will wait 60s for crictl version
	I0927 00:15:28.774467  127143 exec_runner.go:51] Run: which crictl
	I0927 00:15:28.775334  127143 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0927 00:15:28.806819  127143 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0927 00:15:28.806901  127143 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0927 00:15:28.830163  127143 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0927 00:15:28.854192  127143 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0927 00:15:28.854304  127143 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0927 00:15:28.857252  127143 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0927 00:15:28.858663  127143 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0927 00:15:28.858793  127143 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0927 00:15:28.858806  127143 kubeadm.go:934] updating node { 10.154.0.4 8443 v1.31.1 docker true true} ...
	I0927 00:15:28.858916  127143 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-9 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.154.0.4 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
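	The drop-in above sets `ExecStart=` to empty and then re-declares it. That is the standard systemd override pattern: in a drop-in, a bare `ExecStart=` clears the ExecStart list inherited from the base `kubelet.service` before the replacement command is installed. A minimal sketch, using a scratch directory rather than the real `/etc/systemd/system` tree:

```shell
#!/bin/sh
# Sketch: the two-step ExecStart override pattern from 10-kubeadm.conf,
# reproduced in a scratch directory (paths here are illustrative).
set -eu
dir=$(mktemp -d)
cat > "$dir/10-kubeadm.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --node-ip=10.154.0.4
EOF
# The first, empty ExecStart= resets whatever kubelet.service declared;
# the second line installs the replacement command line.
grep -c '^ExecStart=' "$dir/10-kubeadm.conf"
rm -rf "$dir"
```

	Without the empty `ExecStart=` line, systemd would reject the drop-in for a non-oneshot service, since ExecStart entries accumulate rather than replace.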
	I0927 00:15:28.858977  127143 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0927 00:15:28.911738  127143 cni.go:84] Creating CNI manager for ""
	I0927 00:15:28.911764  127143 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:15:28.911777  127143 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0927 00:15:28.911807  127143 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.154.0.4 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-9 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.154.0.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.154.0.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0927 00:15:28.912002  127143 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.154.0.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-9"
	  kubeletExtraArgs:
	    node-ip: 10.154.0.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.154.0.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0927 00:15:28.912086  127143 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0927 00:15:28.920769  127143 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0927 00:15:28.920840  127143 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0927 00:15:28.928785  127143 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0927 00:15:28.928805  127143 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0927 00:15:28.928849  127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0927 00:15:28.928850  127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0927 00:15:28.928805  127143 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0927 00:15:28.929026  127143 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:15:28.941335  127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0927 00:15:28.979871  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3204681781 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0927 00:15:28.980059  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2218151488 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0927 00:15:29.025178  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1751216618 /var/lib/minikube/binaries/v1.31.1/kubelet
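	The `binary.go:74` lines fetch each binary with a `?checksum=file:...sha256` query, i.e. every download is verified against the SHA-256 file that dl.k8s.io publishes alongside it. The same check can be done by hand with coreutils; this sketch uses a locally created file in place of a real download:

```shell
#!/bin/sh
# Sketch: the sha256 verification implied by the
# "?checksum=file:...kubelet.sha256" URLs above, done with sha256sum.
# A local stand-in file replaces the downloaded binary.
set -eu
dir=$(mktemp -d); cd "$dir"
printf 'stand-in-kubelet-binary' > kubelet
sha256sum kubelet > kubelet.sha256   # role of the published .sha256 file
sha256sum -c kubelet.sha256          # non-zero exit on any corruption
cd /; rm -rf "$dir"
```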
	I0927 00:15:29.095573  127143 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0927 00:15:29.104241  127143 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0927 00:15:29.104266  127143 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0927 00:15:29.104313  127143 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0927 00:15:29.112746  127143 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I0927 00:15:29.112943  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2724996801 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0927 00:15:29.122116  127143 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0927 00:15:29.122149  127143 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0927 00:15:29.122208  127143 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0927 00:15:29.130497  127143 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0927 00:15:29.130661  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1833491100 /lib/systemd/system/kubelet.service
	I0927 00:15:29.140449  127143 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0927 00:15:29.140577  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3226971426 /var/tmp/minikube/kubeadm.yaml.new
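	The kubeadm config dumped earlier is materialized here as `/var/tmp/minikube/kubeadm.yaml.new` (2153 bytes) before being promoted to `kubeadm.yaml`. A quick sanity check on such a file can be done with plain grep; the fields chosen below are illustrative, not minikube's own validation:

```shell
#!/bin/sh
# Sketch: grep-level sanity check of a kubeadm config fragment like the
# one in the log. A temp file stands in for /var/tmp/minikube/kubeadm.yaml.new.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.31.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
EOF
# The pod CIDR must match what the CNI (bridge, per cni.go:158) expects.
grep -q 'kubernetesVersion: v1.31.1' "$cfg"
grep -q 'podSubnet: "10.244.0.0/16"' "$cfg"
echo "config fields consistent"
rm -f "$cfg"
```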
	I0927 00:15:29.149392  127143 exec_runner.go:51] Run: grep 10.154.0.4	control-plane.minikube.internal$ /etc/hosts
	I0927 00:15:29.150749  127143 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0927 00:15:29.377259  127143 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0927 00:15:29.391747  127143 certs.go:68] Setting up /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube for IP: 10.154.0.4
	I0927 00:15:29.391775  127143 certs.go:194] generating shared ca certs ...
	I0927 00:15:29.391817  127143 certs.go:226] acquiring lock for ca certs: {Name:mk756c5fab023c128c8a1ee40b210d4906fcf7ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:29.391976  127143 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19711-116460/.minikube/ca.key
	I0927 00:15:29.392023  127143 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19711-116460/.minikube/proxy-client-ca.key
	I0927 00:15:29.392037  127143 certs.go:256] generating profile certs ...
	I0927 00:15:29.392113  127143 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/client.key
	I0927 00:15:29.392132  127143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/client.crt with IP's: []
	I0927 00:15:29.511224  127143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/client.crt ...
	I0927 00:15:29.511259  127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/client.crt: {Name:mk523f5ef9545f5657d8fdc08dc03deac1b0df8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:29.511434  127143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/client.key ...
	I0927 00:15:29.511451  127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/client.key: {Name:mkc69db3956d3a764b405bfb9fb4610e0667c104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:29.511544  127143 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.key.1b9420d6
	I0927 00:15:29.511560  127143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.crt.1b9420d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.154.0.4]
	I0927 00:15:29.694902  127143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.crt.1b9420d6 ...
	I0927 00:15:29.694938  127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.crt.1b9420d6: {Name:mk7addcd2141e6f37fc5edcf6970dd4475e3537a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:29.695128  127143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.key.1b9420d6 ...
	I0927 00:15:29.695158  127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.key.1b9420d6: {Name:mk73cbc0fb0bc62c3a7760cbeaa9d0ff4b5b0b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:29.695256  127143 certs.go:381] copying /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.crt.1b9420d6 -> /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.crt
	I0927 00:15:29.695379  127143 certs.go:385] copying /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.key.1b9420d6 -> /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.key
	I0927 00:15:29.695467  127143 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.key
	I0927 00:15:29.695488  127143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0927 00:15:29.856263  127143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.crt ...
	I0927 00:15:29.856304  127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.crt: {Name:mkef23134c28d00bad2b1e8ae2ef253b7d3a6849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:29.856488  127143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.key ...
	I0927 00:15:29.856506  127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.key: {Name:mk4fa663cf49f2e04c5a9b15417dfaa8afeced43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:29.856709  127143 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-116460/.minikube/certs/ca-key.pem (1675 bytes)
	I0927 00:15:29.856760  127143 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-116460/.minikube/certs/ca.pem (1082 bytes)
	I0927 00:15:29.856792  127143 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-116460/.minikube/certs/cert.pem (1123 bytes)
	I0927 00:15:29.856831  127143 certs.go:484] found cert: /home/jenkins/minikube-integration/19711-116460/.minikube/certs/key.pem (1679 bytes)
	I0927 00:15:29.857548  127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0927 00:15:29.857706  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2296640606 /var/lib/minikube/certs/ca.crt
	I0927 00:15:29.866992  127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0927 00:15:29.867179  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube146355848 /var/lib/minikube/certs/ca.key
	I0927 00:15:29.875374  127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0927 00:15:29.875522  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3179250502 /var/lib/minikube/certs/proxy-client-ca.crt
	I0927 00:15:29.884851  127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0927 00:15:29.885106  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3654155351 /var/lib/minikube/certs/proxy-client-ca.key
	I0927 00:15:29.892975  127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0927 00:15:29.893198  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube313591325 /var/lib/minikube/certs/apiserver.crt
	I0927 00:15:29.901665  127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0927 00:15:29.901886  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube922025066 /var/lib/minikube/certs/apiserver.key
	I0927 00:15:29.910010  127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0927 00:15:29.910195  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube577114657 /var/lib/minikube/certs/proxy-client.crt
	I0927 00:15:29.918339  127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0927 00:15:29.918470  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3872470726 /var/lib/minikube/certs/proxy-client.key
	I0927 00:15:29.926778  127143 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0927 00:15:29.926801  127143 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:29.926853  127143 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:29.934329  127143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19711-116460/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0927 00:15:29.934488  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2488842704 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:29.942519  127143 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0927 00:15:29.942639  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1659676463 /var/lib/minikube/kubeconfig
	I0927 00:15:29.950827  127143 exec_runner.go:51] Run: openssl version
	I0927 00:15:29.953647  127143 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0927 00:15:29.962278  127143 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:29.963561  127143 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 27 00:15 /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:29.963625  127143 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0927 00:15:29.966492  127143 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
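	The `b5213941.0` link name is not arbitrary: it is OpenSSL's subject-name hash of minikubeCA.pem plus a `.0` suffix, the same naming `c_rehash` uses so that trust-store lookups can resolve a CA by hashed filename. A sketch with a throwaway self-signed certificate (assumes `openssl` is on PATH):

```shell
#!/bin/sh
# Sketch: how names like "b5213941.0" are derived -- subject hash of the
# CA cert plus ".0". Uses a throwaway cert, not the real minikubeCA.pem.
set -eu
dir=$(mktemp -d); cd "$dir"
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
    -keyout ca.key -out ca.pem -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in ca.pem)   # 8 hex digits
ln -fs ca.pem "$hash.0"                        # mirrors the /etc/ssl/certs link
echo "link name: $hash.0"
cd /; rm -rf "$dir"
```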
	I0927 00:15:29.974604  127143 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0927 00:15:29.975772  127143 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0927 00:15:29.975807  127143 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:15:29.975907  127143 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0927 00:15:29.990895  127143 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0927 00:15:30.000918  127143 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0927 00:15:30.009556  127143 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0927 00:15:30.030860  127143 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0927 00:15:30.040703  127143 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0927 00:15:30.040730  127143 kubeadm.go:157] found existing configuration files:
	
	I0927 00:15:30.040783  127143 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0927 00:15:30.048956  127143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0927 00:15:30.049038  127143 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0927 00:15:30.057984  127143 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0927 00:15:30.066136  127143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0927 00:15:30.066197  127143 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0927 00:15:30.074289  127143 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0927 00:15:30.082936  127143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0927 00:15:30.083006  127143 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0927 00:15:30.091783  127143 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0927 00:15:30.100647  127143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0927 00:15:30.100710  127143 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0927 00:15:30.108881  127143 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
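	The init command runs kubeadm via `sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" ...` so that the version-matched binaries staged earlier shadow anything already on the system PATH. The shadowing effect can be sketched without sudo, with a scratch bin directory standing in for the minikube binaries directory:

```shell
#!/bin/sh
# Sketch: the PATH-prefix trick from the kubeadm init invocation above.
# A stub script stands in for /var/lib/minikube/binaries/v1.31.1/kubeadm.
set -eu
bin=$(mktemp -d)
printf '#!/bin/sh\necho v1.31.1-stub\n' > "$bin/kubeadm"
chmod +x "$bin/kubeadm"
# Prefixing PATH makes this stub win over any system-wide kubeadm:
env PATH="$bin:$PATH" kubeadm
rm -rf "$bin"
```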
	I0927 00:15:30.150559  127143 kubeadm.go:310] W0927 00:15:30.150438  128015 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:15:30.151040  127143 kubeadm.go:310] W0927 00:15:30.150996  128015 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0927 00:15:30.152679  127143 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0927 00:15:30.152724  127143 kubeadm.go:310] [preflight] Running pre-flight checks
	I0927 00:15:30.252455  127143 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0927 00:15:30.252565  127143 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0927 00:15:30.252578  127143 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0927 00:15:30.252585  127143 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0927 00:15:30.264679  127143 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0927 00:15:30.267119  127143 out.go:235]   - Generating certificates and keys ...
	I0927 00:15:30.267168  127143 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0927 00:15:30.267183  127143 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0927 00:15:30.322639  127143 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0927 00:15:30.489694  127143 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0927 00:15:30.653716  127143 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0927 00:15:30.766107  127143 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0927 00:15:30.920743  127143 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0927 00:15:30.920813  127143 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
	I0927 00:15:30.978828  127143 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0927 00:15:30.978868  127143 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-9] and IPs [10.154.0.4 127.0.0.1 ::1]
	I0927 00:15:31.156136  127143 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0927 00:15:31.338767  127143 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0927 00:15:31.523801  127143 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0927 00:15:31.523974  127143 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0927 00:15:31.633944  127143 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0927 00:15:31.827967  127143 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0927 00:15:31.943031  127143 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0927 00:15:32.007306  127143 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0927 00:15:32.102454  127143 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0927 00:15:32.103003  127143 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0927 00:15:32.105469  127143 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0927 00:15:32.107613  127143 out.go:235]   - Booting up control plane ...
	I0927 00:15:32.107644  127143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0927 00:15:32.107663  127143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0927 00:15:32.108360  127143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0927 00:15:32.129296  127143 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0927 00:15:32.134267  127143 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0927 00:15:32.134327  127143 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0927 00:15:32.357211  127143 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0927 00:15:32.357242  127143 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0927 00:15:32.858795  127143 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.564194ms
	I0927 00:15:32.858825  127143 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0927 00:15:36.860407  127143 kubeadm.go:310] [api-check] The API server is healthy after 4.001590682s
	I0927 00:15:36.873589  127143 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0927 00:15:36.885294  127143 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0927 00:15:36.906023  127143 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0927 00:15:36.906069  127143 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-9 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0927 00:15:36.914328  127143 kubeadm.go:310] [bootstrap-token] Using token: cb9aai.7468zzz9nketn421
	I0927 00:15:36.915888  127143 out.go:235]   - Configuring RBAC rules ...
	I0927 00:15:36.915923  127143 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0927 00:15:36.920195  127143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0927 00:15:36.928140  127143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0927 00:15:36.930921  127143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0927 00:15:36.934654  127143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0927 00:15:36.937445  127143 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0927 00:15:37.268914  127143 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0927 00:15:37.700291  127143 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0927 00:15:38.268146  127143 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0927 00:15:38.268934  127143 kubeadm.go:310] 
	I0927 00:15:38.268956  127143 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0927 00:15:38.268961  127143 kubeadm.go:310] 
	I0927 00:15:38.268967  127143 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0927 00:15:38.268971  127143 kubeadm.go:310] 
	I0927 00:15:38.268976  127143 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0927 00:15:38.268980  127143 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0927 00:15:38.269001  127143 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0927 00:15:38.269005  127143 kubeadm.go:310] 
	I0927 00:15:38.269017  127143 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0927 00:15:38.269021  127143 kubeadm.go:310] 
	I0927 00:15:38.269025  127143 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0927 00:15:38.269028  127143 kubeadm.go:310] 
	I0927 00:15:38.269031  127143 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0927 00:15:38.269034  127143 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0927 00:15:38.269037  127143 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0927 00:15:38.269039  127143 kubeadm.go:310] 
	I0927 00:15:38.269043  127143 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0927 00:15:38.269046  127143 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0927 00:15:38.269049  127143 kubeadm.go:310] 
	I0927 00:15:38.269051  127143 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token cb9aai.7468zzz9nketn421 \
	I0927 00:15:38.269055  127143 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1d141527aaf1d2c0fb7b0adcde69f8a0a613dff7bc5dc95cc5153131c10474d3 \
	I0927 00:15:38.269057  127143 kubeadm.go:310] 	--control-plane 
	I0927 00:15:38.269060  127143 kubeadm.go:310] 
	I0927 00:15:38.269063  127143 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0927 00:15:38.269065  127143 kubeadm.go:310] 
	I0927 00:15:38.269068  127143 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token cb9aai.7468zzz9nketn421 \
	I0927 00:15:38.269071  127143 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1d141527aaf1d2c0fb7b0adcde69f8a0a613dff7bc5dc95cc5153131c10474d3 
	I0927 00:15:38.272027  127143 cni.go:84] Creating CNI manager for ""
	I0927 00:15:38.272064  127143 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0927 00:15:38.273916  127143 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0927 00:15:38.275288  127143 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0927 00:15:38.286171  127143 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0927 00:15:38.286334  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3665517556 /etc/cni/net.d/1-k8s.conflist
	I0927 00:15:38.297571  127143 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0927 00:15:38.297633  127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:38.297659  127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-9 minikube.k8s.io/updated_at=2024_09_27T00_15_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0927 00:15:38.306779  127143 ops.go:34] apiserver oom_adj: -16
	I0927 00:15:38.366706  127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:38.866850  127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:39.367329  127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:39.867701  127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:40.367139  127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:40.867268  127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:41.367194  127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:41.867194  127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:42.366845  127143 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0927 00:15:42.437193  127143 kubeadm.go:1113] duration metric: took 4.139618607s to wait for elevateKubeSystemPrivileges
	I0927 00:15:42.437231  127143 kubeadm.go:394] duration metric: took 12.461428273s to StartCluster
	I0927 00:15:42.437253  127143 settings.go:142] acquiring lock: {Name:mk21aca334d9a656fcd6241902ed89386883726b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:42.437330  127143 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19711-116460/kubeconfig
	I0927 00:15:42.438086  127143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19711-116460/kubeconfig: {Name:mk005d44ca8be515ae45a481c5d822e83fc3b66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0927 00:15:42.438424  127143 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0927 00:15:42.438636  127143 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:15:42.438568  127143 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0927 00:15:42.438702  127143 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0927 00:15:42.438719  127143 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0927 00:15:42.438723  127143 addons.go:69] Setting yakd=true in profile "minikube"
	I0927 00:15:42.438742  127143 addons.go:234] Setting addon yakd=true in "minikube"
	I0927 00:15:42.438750  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:42.438778  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:42.438774  127143 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0927 00:15:42.438804  127143 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0927 00:15:42.438798  127143 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0927 00:15:42.438818  127143 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0927 00:15:42.438832  127143 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0927 00:15:42.438846  127143 mustload.go:65] Loading cluster: minikube
	I0927 00:15:42.438849  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:42.439111  127143 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:15:42.439442  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.439459  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.439494  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.439498  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.439508  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.439511  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.439511  127143 addons.go:69] Setting registry=true in profile "minikube"
	I0927 00:15:42.439526  127143 addons.go:234] Setting addon registry=true in "minikube"
	I0927 00:15:42.439535  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.439538  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.439543  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.439553  127143 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0927 00:15:42.439558  127143 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0927 00:15:42.439545  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.439570  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.439571  127143 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0927 00:15:42.439593  127143 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0927 00:15:42.439602  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.439614  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:42.439648  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:42.439792  127143 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0927 00:15:42.439818  127143 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0927 00:15:42.439860  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:42.439991  127143 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0927 00:15:42.440023  127143 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0927 00:15:42.440237  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.440253  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.440296  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.439560  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.440307  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.440328  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.440339  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.440545  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.440558  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.440590  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.440594  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.440609  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.440639  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.440300  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.441002  127143 addons.go:69] Setting volcano=true in profile "minikube"
	I0927 00:15:42.441019  127143 addons.go:234] Setting addon volcano=true in "minikube"
	I0927 00:15:42.441046  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:42.441692  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.441715  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.441745  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.442846  127143 out.go:177] * Configuring local host environment ...
	I0927 00:15:42.439549  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:42.443643  127143 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0927 00:15:42.443723  127143 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0927 00:15:42.439499  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.443778  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:42.443912  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.443938  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.443975  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.443795  127143 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0927 00:15:42.444264  127143 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0927 00:15:42.444294  127143 host.go:66] Checking if "minikube" exists ...
	W0927 00:15:42.446486  127143 out.go:270] * 
	W0927 00:15:42.446508  127143 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0927 00:15:42.446517  127143 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0927 00:15:42.446525  127143 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0927 00:15:42.446532  127143 out.go:270] * 
	W0927 00:15:42.446582  127143 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0927 00:15:42.446594  127143 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0927 00:15:42.446601  127143 out.go:270] * 
	W0927 00:15:42.446629  127143 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0927 00:15:42.446639  127143 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0927 00:15:42.446645  127143 out.go:270] * 
	W0927 00:15:42.446651  127143 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0927 00:15:42.446682  127143 start.go:235] Will wait 6m0s for node &{Name: IP:10.154.0.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0927 00:15:42.450487  127143 out.go:177] * Verifying Kubernetes components...
	I0927 00:15:42.451754  127143 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0927 00:15:42.460334  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.461499  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.463671  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.466089  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.467767  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.468364  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.471321  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.472257  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.472623  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.474011  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.474045  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.474139  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.474806  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.474832  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.474864  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.476217  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.476868  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.487357  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.487505  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.489822  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.489917  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.491704  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.491776  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.493650  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.493721  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.494198  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.494258  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.500622  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.500690  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.500978  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.504801  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.508501  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.508570  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.509098  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.511986  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.513961  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.513996  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.514774  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.514836  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.516623  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.516651  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.518144  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.518206  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.519344  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.519369  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.521209  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.521779  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.522243  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:42.530795  127143 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0927 00:15:42.532279  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.532303  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.532317  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.532323  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.532694  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.533673  127143 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0927 00:15:42.533737  127143 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0927 00:15:42.534015  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube361156128 /etc/kubernetes/addons/ig-namespace.yaml
	I0927 00:15:42.534295  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.534313  127143 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0927 00:15:42.534317  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.536355  127143 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0927 00:15:42.536388  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0927 00:15:42.536516  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2060666848 /etc/kubernetes/addons/deployment.yaml
	I0927 00:15:42.545482  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.545578  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.545636  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.545705  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.546501  127143 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0927 00:15:42.546543  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:42.547230  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.547247  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.547284  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.547560  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.547588  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.548301  127143 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0927 00:15:42.548916  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.548929  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.549408  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.549574  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.549620  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.550870  127143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0927 00:15:42.550966  127143 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0927 00:15:42.550992  127143 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0927 00:15:42.551148  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube74715924 /etc/kubernetes/addons/yakd-ns.yaml
	I0927 00:15:42.554651  127143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0927 00:15:42.555527  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.555656  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.556026  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.557448  127143 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0927 00:15:42.557552  127143 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0927 00:15:42.557678  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.558966  127143 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0927 00:15:42.559016  127143 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0927 00:15:42.559142  127143 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:15:42.559159  127143 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0927 00:15:42.559167  127143 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:15:42.559206  127143 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:15:42.560451  127143 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:15:42.560481  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0927 00:15:42.560514  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.560533  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.560621  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2257949060 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:15:42.562656  127143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0927 00:15:42.562848  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.563778  127143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0927 00:15:42.563842  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.563892  127143 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0927 00:15:42.563897  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.564439  127143 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 00:15:42.564471  127143 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0927 00:15:42.564634  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1616076761 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0927 00:15:42.565433  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.567075  127143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0927 00:15:42.567170  127143 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0927 00:15:42.567277  127143 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0927 00:15:42.567337  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:42.568217  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:42.568269  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:42.568460  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:42.568510  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.568531  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.572947  127143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0927 00:15:42.573737  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.574224  127143 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0927 00:15:42.574595  127143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 00:15:42.574636  127143 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0927 00:15:42.574789  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube187722530 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0927 00:15:42.575355  127143 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0927 00:15:42.576516  127143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 00:15:42.576557  127143 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0927 00:15:42.576702  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4100290210 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0927 00:15:42.577278  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.578424  127143 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 00:15:42.578511  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0927 00:15:42.579112  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube241673189 /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 00:15:42.581851  127143 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0927 00:15:42.581889  127143 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0927 00:15:42.582021  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3790915627 /etc/kubernetes/addons/yakd-sa.yaml
	I0927 00:15:42.582771  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0927 00:15:42.589366  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0927 00:15:42.589548  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2461086298 /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:15:42.589796  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.589819  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.590103  127143 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 00:15:42.590158  127143 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0927 00:15:42.590284  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3455873887 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0927 00:15:42.590995  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0927 00:15:42.593580  127143 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0927 00:15:42.593611  127143 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0927 00:15:42.593725  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube919025420 /etc/kubernetes/addons/yakd-crb.yaml
	I0927 00:15:42.595291  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.597430  127143 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0927 00:15:42.599001  127143 out.go:177]   - Using image docker.io/registry:2.8.3
	I0927 00:15:42.600628  127143 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0927 00:15:42.600664  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0927 00:15:42.600839  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube657484236 /etc/kubernetes/addons/registry-rc.yaml
	I0927 00:15:42.601106  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.601135  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.606793  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.606948  127143 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 00:15:42.606977  127143 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0927 00:15:42.607100  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:42.607114  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2137832905 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0927 00:15:42.608745  127143 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0927 00:15:42.610422  127143 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 00:15:42.610463  127143 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0927 00:15:42.611427  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1625268335 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0927 00:15:42.611495  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.611552  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.614470  127143 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0927 00:15:42.614506  127143 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0927 00:15:42.614649  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1013574020 /etc/kubernetes/addons/ig-role.yaml
	I0927 00:15:42.615167  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0927 00:15:42.619911  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0927 00:15:42.622799  127143 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0927 00:15:42.622831  127143 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0927 00:15:42.623090  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube734472719 /etc/kubernetes/addons/yakd-svc.yaml
	I0927 00:15:42.628327  127143 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 00:15:42.628372  127143 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0927 00:15:42.628594  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube234760308 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0927 00:15:42.631446  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:42.631510  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:42.634914  127143 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0927 00:15:42.638481  127143 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0927 00:15:42.638527  127143 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 00:15:42.638531  127143 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0927 00:15:42.638615  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0927 00:15:42.638844  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3251734669 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0927 00:15:42.641622  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1622212682 /etc/kubernetes/addons/registry-svc.yaml
	I0927 00:15:42.644895  127143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 00:15:42.644934  127143 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0927 00:15:42.645198  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube284046721 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0927 00:15:42.647339  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.647366  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.648574  127143 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 00:15:42.648608  127143 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0927 00:15:42.648833  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3380755994 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0927 00:15:42.652117  127143 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:15:42.652149  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0927 00:15:42.652291  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1755796444 /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:15:42.661158  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.661218  127143 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0927 00:15:42.661236  127143 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0927 00:15:42.661244  127143 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0927 00:15:42.661286  127143 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:15:42.667316  127143 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 00:15:42.667353  127143 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0927 00:15:42.667525  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3659750276 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0927 00:15:42.668724  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:42.668749  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:42.680009  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:42.682051  127143 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0927 00:15:42.683702  127143 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 00:15:42.683739  127143 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0927 00:15:42.683879  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3605290430 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0927 00:15:42.684902  127143 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0927 00:15:42.685298  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube931078911 /etc/kubernetes/addons/storageclass.yaml
	I0927 00:15:42.689730  127143 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 00:15:42.689767  127143 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0927 00:15:42.690021  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3067791732 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0927 00:15:42.691621  127143 out.go:177]   - Using image docker.io/busybox:stable
	I0927 00:15:42.703220  127143 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:15:42.703378  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0927 00:15:42.703784  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2302943204 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:15:42.714098  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0927 00:15:42.714366  127143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 00:15:42.714389  127143 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0927 00:15:42.714523  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3912720131 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0927 00:15:42.715061  127143 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:15:42.715088  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0927 00:15:42.715206  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube491324605 /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:15:42.719732  127143 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:15:42.719773  127143 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0927 00:15:42.719914  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2321845525 /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:15:42.727124  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0927 00:15:42.729285  127143 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 00:15:42.729327  127143 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0927 00:15:42.730144  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2687041217 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0927 00:15:42.753650  127143 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 00:15:42.753688  127143 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0927 00:15:42.754524  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube776664454 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0927 00:15:42.754130  127143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 00:15:42.754705  127143 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0927 00:15:42.754858  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube28539729 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0927 00:15:42.755797  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0927 00:15:42.757907  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0927 00:15:42.757942  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0927 00:15:42.774276  127143 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0927 00:15:42.774320  127143 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0927 00:15:42.774488  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3808091519 /etc/kubernetes/addons/ig-crd.yaml
	I0927 00:15:42.775819  127143 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:15:42.775852  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0927 00:15:42.775986  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3737515673 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:15:42.798259  127143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 00:15:42.798306  127143 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0927 00:15:42.798450  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1723079502 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0927 00:15:42.824046  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:15:42.852092  127143 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0927 00:15:42.863846  127143 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:15:42.863891  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0927 00:15:42.864058  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1905860265 /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:15:42.892477  127143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 00:15:42.892844  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0927 00:15:42.893065  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2552489540 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0927 00:15:42.898622  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0927 00:15:42.937933  127143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 00:15:42.937979  127143 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0927 00:15:42.938136  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2797859149 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0927 00:15:42.942211  127143 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-9" to be "Ready" ...
	I0927 00:15:42.945746  127143 node_ready.go:49] node "ubuntu-20-agent-9" has status "Ready":"True"
	I0927 00:15:42.945774  127143 node_ready.go:38] duration metric: took 3.52972ms for node "ubuntu-20-agent-9" to be "Ready" ...
	I0927 00:15:42.945786  127143 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:15:42.955399  127143 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:42.993568  127143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 00:15:42.993624  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0927 00:15:42.993806  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1621205174 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0927 00:15:43.011347  127143 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0927 00:15:43.059524  127143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 00:15:43.059565  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0927 00:15:43.059716  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2777152784 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0927 00:15:43.158237  127143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:15:43.158290  127143 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0927 00:15:43.158443  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2913326724 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:15:43.281030  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0927 00:15:43.515014  127143 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0927 00:15:43.601104  127143 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0927 00:15:43.833268  127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.075278624s)
	I0927 00:15:43.833308  127143 addons.go:475] Verifying addon registry=true in "minikube"
	I0927 00:15:43.837520  127143 out.go:177] * Verifying registry addon...
	I0927 00:15:43.841404  127143 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0927 00:15:43.852143  127143 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0927 00:15:43.852169  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:43.852498  127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.096655141s)
	I0927 00:15:43.852528  127143 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0927 00:15:43.917505  127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.018807483s)
	I0927 00:15:43.985146  127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.257781693s)
	I0927 00:15:44.348048  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:44.645487  127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.821377634s)
	W0927 00:15:44.645534  127143 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:15:44.645561  127143 retry.go:31] will retry after 128.327586ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0927 00:15:44.777214  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0927 00:15:44.846675  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:44.966858  127143 pod_ready.go:103] pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:45.352597  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:45.637949  127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.022726323s)
	I0927 00:15:45.811423  127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.530320851s)
	I0927 00:15:45.811465  127143 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0927 00:15:45.819721  127143 out.go:177] * Verifying csi-hostpath-driver addon...
	I0927 00:15:45.822245  127143 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0927 00:15:45.827421  127143 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0927 00:15:45.827450  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:45.846299  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:45.963134  127143 pod_ready.go:93] pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:45.963170  127143 pod_ready.go:82] duration metric: took 3.007652986s for pod "etcd-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:45.963184  127143 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:46.327672  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:46.428359  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:46.827601  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:46.845189  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:47.326843  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:47.426377  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:47.627079  127143 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.849808217s)
	I0927 00:15:47.827896  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:47.844918  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:47.968676  127143 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:48.328049  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:48.428765  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:48.827658  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:48.845931  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:49.327764  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:49.428298  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:49.539656  127143 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0927 00:15:49.539813  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4150585855 /var/lib/minikube/google_application_credentials.json
	I0927 00:15:49.553393  127143 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0927 00:15:49.553541  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube217750436 /var/lib/minikube/google_cloud_project
	I0927 00:15:49.565547  127143 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0927 00:15:49.565614  127143 host.go:66] Checking if "minikube" exists ...
	I0927 00:15:49.566424  127143 kubeconfig.go:125] found "minikube" server: "https://10.154.0.4:8443"
	I0927 00:15:49.566454  127143 api_server.go:166] Checking apiserver status ...
	I0927 00:15:49.566499  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:49.586819  127143 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/128421/cgroup
	I0927 00:15:49.598122  127143 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0"
	I0927 00:15:49.598201  127143 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4f9a26d749fe518b62c185a45d96b1d2/9a96c9e4e13e68547776a40ead043e3be57f1e1a85914c09e4a0cd25c08061c0/freezer.state
	I0927 00:15:49.608404  127143 api_server.go:204] freezer state: "THAWED"
	I0927 00:15:49.608444  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:49.613890  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:49.613963  127143 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0927 00:15:49.619988  127143 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0927 00:15:49.621698  127143 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0927 00:15:49.622992  127143 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 00:15:49.623022  127143 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0927 00:15:49.623141  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2286610540 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0927 00:15:49.634254  127143 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 00:15:49.634292  127143 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0927 00:15:49.634418  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3826646702 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0927 00:15:49.650371  127143 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:15:49.650407  127143 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0927 00:15:49.650541  127143 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4210990779 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:15:49.662264  127143 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0927 00:15:49.826811  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:49.845658  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:49.969808  127143 pod_ready.go:103] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"False"
	I0927 00:15:50.068528  127143 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0927 00:15:50.071321  127143 out.go:177] * Verifying gcp-auth addon...
	I0927 00:15:50.073946  127143 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0927 00:15:50.076404  127143 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:15:50.328176  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:50.345413  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:50.470435  127143 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:50.470461  127143 pod_ready.go:82] duration metric: took 4.507268273s for pod "kube-apiserver-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:50.470480  127143 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:50.475866  127143 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:50.475893  127143 pod_ready.go:82] duration metric: took 5.404124ms for pod "kube-controller-manager-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:50.475906  127143 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:50.483762  127143 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace has status "Ready":"True"
	I0927 00:15:50.483788  127143 pod_ready.go:82] duration metric: took 7.872857ms for pod "kube-scheduler-ubuntu-20-agent-9" in "kube-system" namespace to be "Ready" ...
	I0927 00:15:50.483799  127143 pod_ready.go:39] duration metric: took 7.537998935s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0927 00:15:50.483829  127143 api_server.go:52] waiting for apiserver process to appear ...
	I0927 00:15:50.483900  127143 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0927 00:15:50.505484  127143 api_server.go:72] duration metric: took 8.058753379s to wait for apiserver process to appear ...
	I0927 00:15:50.505515  127143 api_server.go:88] waiting for apiserver healthz status ...
	I0927 00:15:50.505540  127143 api_server.go:253] Checking apiserver healthz at https://10.154.0.4:8443/healthz ...
	I0927 00:15:50.510531  127143 api_server.go:279] https://10.154.0.4:8443/healthz returned 200:
	ok
	I0927 00:15:50.511716  127143 api_server.go:141] control plane version: v1.31.1
	I0927 00:15:50.511744  127143 api_server.go:131] duration metric: took 6.223066ms to wait for apiserver health ...
	I0927 00:15:50.511752  127143 system_pods.go:43] waiting for kube-system pods to appear ...
	I0927 00:15:50.520652  127143 system_pods.go:59] 16 kube-system pods found
	I0927 00:15:50.520695  127143 system_pods.go:61] "coredns-7c65d6cfc9-ngvr4" [2c728ce6-d71d-4c64-b8c8-34d355c60149] Running
	I0927 00:15:50.520709  127143 system_pods.go:61] "csi-hostpath-attacher-0" [df77f5ca-6299-459d-aafa-f0969d70ecbb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:15:50.520718  127143 system_pods.go:61] "csi-hostpath-resizer-0" [df25bef3-80be-42a6-a819-9fa1f1302d97] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:15:50.520731  127143 system_pods.go:61] "csi-hostpathplugin-9646r" [219d4a80-1ca9-4901-b888-e20a6ee002b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:15:50.520742  127143 system_pods.go:61] "etcd-ubuntu-20-agent-9" [d1556729-ab60-4fb3-a865-8570ee4621fa] Running
	I0927 00:15:50.520752  127143 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-9" [64869e41-d09d-4b88-b49f-16fa0da814dd] Running
	I0927 00:15:50.520763  127143 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-9" [d234ccc4-c9d8-408f-a89f-b7e0c4f2adaa] Running
	I0927 00:15:50.520771  127143 system_pods.go:61] "kube-proxy-r2kqg" [220c7678-ba9d-42fd-b333-f93c0854dd8f] Running
	I0927 00:15:50.520783  127143 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-9" [5718108b-d751-4c17-85b4-3a54a6e03dae] Running
	I0927 00:15:50.520791  127143 system_pods.go:61] "metrics-server-84c5f94fbc-zb9hk" [8f9b05a4-44d3-4031-a273-6c55abe9fb84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:15:50.520797  127143 system_pods.go:61] "nvidia-device-plugin-daemonset-rkscq" [b8bdd9f8-c0ca-4711-bd0d-a07df1e4fded] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 00:15:50.520805  127143 system_pods.go:61] "registry-66c9cd494c-5zfg4" [32dd9391-b30e-4231-9d9e-8bd0457919d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 00:15:50.520811  127143 system_pods.go:61] "registry-proxy-rbxpj" [ae04301c-b1c9-4a19-af2e-04bc0071e797] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 00:15:50.520816  127143 system_pods.go:61] "snapshot-controller-56fcc65765-2wjqz" [f64ff038-32d1-4307-96df-d385f96a0efa] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:15:50.520822  127143 system_pods.go:61] "snapshot-controller-56fcc65765-g8jrp" [5b1ced1b-2ef9-458a-b4eb-74e96136ca34] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:15:50.520825  127143 system_pods.go:61] "storage-provisioner" [15bb45eb-db23-4bae-9e56-982f2031327d] Running
	I0927 00:15:50.520831  127143 system_pods.go:74] duration metric: took 9.072732ms to wait for pod list to return data ...
	I0927 00:15:50.520838  127143 default_sa.go:34] waiting for default service account to be created ...
	I0927 00:15:50.523766  127143 default_sa.go:45] found service account: "default"
	I0927 00:15:50.523796  127143 default_sa.go:55] duration metric: took 2.950985ms for default service account to be created ...
	I0927 00:15:50.523808  127143 system_pods.go:116] waiting for k8s-apps to be running ...
	I0927 00:15:50.532797  127143 system_pods.go:86] 16 kube-system pods found
	I0927 00:15:50.532832  127143 system_pods.go:89] "coredns-7c65d6cfc9-ngvr4" [2c728ce6-d71d-4c64-b8c8-34d355c60149] Running
	I0927 00:15:50.532845  127143 system_pods.go:89] "csi-hostpath-attacher-0" [df77f5ca-6299-459d-aafa-f0969d70ecbb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0927 00:15:50.532854  127143 system_pods.go:89] "csi-hostpath-resizer-0" [df25bef3-80be-42a6-a819-9fa1f1302d97] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0927 00:15:50.532873  127143 system_pods.go:89] "csi-hostpathplugin-9646r" [219d4a80-1ca9-4901-b888-e20a6ee002b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0927 00:15:50.532884  127143 system_pods.go:89] "etcd-ubuntu-20-agent-9" [d1556729-ab60-4fb3-a865-8570ee4621fa] Running
	I0927 00:15:50.532892  127143 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-9" [64869e41-d09d-4b88-b49f-16fa0da814dd] Running
	I0927 00:15:50.532902  127143 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-9" [d234ccc4-c9d8-408f-a89f-b7e0c4f2adaa] Running
	I0927 00:15:50.532909  127143 system_pods.go:89] "kube-proxy-r2kqg" [220c7678-ba9d-42fd-b333-f93c0854dd8f] Running
	I0927 00:15:50.532918  127143 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-9" [5718108b-d751-4c17-85b4-3a54a6e03dae] Running
	I0927 00:15:50.532927  127143 system_pods.go:89] "metrics-server-84c5f94fbc-zb9hk" [8f9b05a4-44d3-4031-a273-6c55abe9fb84] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0927 00:15:50.532939  127143 system_pods.go:89] "nvidia-device-plugin-daemonset-rkscq" [b8bdd9f8-c0ca-4711-bd0d-a07df1e4fded] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0927 00:15:50.532951  127143 system_pods.go:89] "registry-66c9cd494c-5zfg4" [32dd9391-b30e-4231-9d9e-8bd0457919d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0927 00:15:50.532962  127143 system_pods.go:89] "registry-proxy-rbxpj" [ae04301c-b1c9-4a19-af2e-04bc0071e797] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0927 00:15:50.532972  127143 system_pods.go:89] "snapshot-controller-56fcc65765-2wjqz" [f64ff038-32d1-4307-96df-d385f96a0efa] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:15:50.533003  127143 system_pods.go:89] "snapshot-controller-56fcc65765-g8jrp" [5b1ced1b-2ef9-458a-b4eb-74e96136ca34] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0927 00:15:50.533013  127143 system_pods.go:89] "storage-provisioner" [15bb45eb-db23-4bae-9e56-982f2031327d] Running
	I0927 00:15:50.533032  127143 system_pods.go:126] duration metric: took 9.207708ms to wait for k8s-apps to be running ...
	I0927 00:15:50.533044  127143 system_svc.go:44] waiting for kubelet service to be running ....
	I0927 00:15:50.533104  127143 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0927 00:15:50.548603  127143 system_svc.go:56] duration metric: took 15.544485ms WaitForService to wait for kubelet
	I0927 00:15:50.548635  127143 kubeadm.go:582] duration metric: took 8.101914346s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0927 00:15:50.548661  127143 node_conditions.go:102] verifying NodePressure condition ...
	I0927 00:15:50.552115  127143 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0927 00:15:50.552148  127143 node_conditions.go:123] node cpu capacity is 8
	I0927 00:15:50.552162  127143 node_conditions.go:105] duration metric: took 3.495203ms to run NodePressure ...
	I0927 00:15:50.552177  127143 start.go:241] waiting for startup goroutines ...
	I0927 00:15:50.828054  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:50.845416  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:51.327952  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:51.344874  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:51.827796  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:51.845319  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:52.327207  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:52.345385  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:52.829746  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:52.927007  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:53.326848  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:53.344620  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:53.826928  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:53.845195  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:54.326132  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:54.345443  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:54.826978  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:54.845308  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:55.327393  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:55.345255  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:55.826291  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:55.845942  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:56.326997  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:56.345670  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:56.924639  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:56.925497  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:57.327149  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:57.345390  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:57.827488  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:57.846116  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:58.327628  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:58.345906  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:58.827098  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:58.845553  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:59.327540  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:59.564547  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:15:59.826747  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:15:59.844934  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:00.326565  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:00.345209  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:00.827691  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:00.845031  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:01.327982  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:01.345404  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:01.827212  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:01.845855  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:02.327872  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:02.345203  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:02.940267  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:02.941017  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:03.327971  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:03.345195  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:03.827462  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:03.845448  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:04.334793  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:04.344399  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:04.827146  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:04.845669  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:05.327345  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:05.345977  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:05.827534  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:05.846001  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:06.326389  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:06.344789  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:06.827064  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:06.845070  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:07.327059  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:07.345837  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:07.828064  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:07.845064  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:08.326876  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:08.426842  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:08.826565  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:08.845549  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:09.326909  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:09.344931  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:09.826800  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:09.844668  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:10.326171  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:10.346055  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:10.827002  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:10.845490  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:11.327411  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:11.344554  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0927 00:16:11.827124  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:11.846218  127143 kapi.go:107] duration metric: took 28.0048158s to wait for kubernetes.io/minikube-addons=registry ...
	I0927 00:16:12.326521  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:12.827052  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:13.327446  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:13.826734  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:14.326392  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:14.827006  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:15.329829  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:15.826974  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:16.326781  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:16.827948  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:17.327439  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:17.827170  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:18.327204  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:18.826640  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:19.327793  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:19.827366  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:20.326788  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:20.828043  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:21.327075  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:21.828376  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:22.327760  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:22.828130  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:23.326998  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:23.870741  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:24.328222  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:24.827217  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:25.327373  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:25.827429  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:26.327147  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:26.827541  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:27.327878  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:27.827843  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:28.327667  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:28.827865  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0927 00:16:29.326240  127143 kapi.go:107] duration metric: took 43.503994746s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0927 00:16:31.577867  127143 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0927 00:16:31.577892  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:32.078136  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:32.577560  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:33.078252  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:33.577596  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:34.078027  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:34.577189  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:35.077783  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:35.578117  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:36.077726  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:36.578325  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:37.077808  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:37.577442  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:38.077883  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:38.576922  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:39.077555  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:39.577802  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:40.077830  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:40.576835  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:41.077543  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:41.578140  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:42.077791  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:42.577429  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:43.077361  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:43.577289  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:44.078476  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:44.577723  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:45.077896  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:45.577421  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:46.078476  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:46.577496  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:47.077958  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:47.577433  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:48.077570  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:48.577579  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:49.078175  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:49.577902  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:50.077368  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:50.577391  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:51.078013  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:51.577459  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:52.077490  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:52.577863  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:53.077951  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:53.578272  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:54.077450  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:54.577573  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:55.077667  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:55.577806  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:56.079165  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:56.578243  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:57.077713  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:57.578555  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:58.078106  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:58.577224  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:59.077647  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:16:59.578198  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:00.077307  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:00.577323  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:01.077475  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:01.577859  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:02.077384  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:02.577178  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:03.077237  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:03.577919  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:04.077221  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:04.577225  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:05.077116  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:05.577735  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:06.077702  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:06.578549  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:07.077997  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:07.577518  127143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0927 00:17:08.077848  127143 kapi.go:107] duration metric: took 1m18.003900162s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0927 00:17:08.079811  127143 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0927 00:17:08.081174  127143 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0927 00:17:08.082714  127143 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0927 00:17:08.084257  127143 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, yakd, metrics-server, inspektor-gadget, storage-provisioner-rancher, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0927 00:17:08.085666  127143 addons.go:510] duration metric: took 1m25.647244924s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner yakd metrics-server inspektor-gadget storage-provisioner-rancher volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0927 00:17:08.085733  127143 start.go:246] waiting for cluster config update ...
	I0927 00:17:08.085757  127143 start.go:255] writing updated cluster config ...
	I0927 00:17:08.086173  127143 exec_runner.go:51] Run: rm -f paused
	I0927 00:17:08.131608  127143 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0927 00:17:08.133847  127143 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2024-09-20 09:35:01 UTC, end at Fri 2024-09-27 00:27:01 UTC. --
	Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.260902721Z" level=error msg="Error running exec 8f6fce7b85494bb989da6ef75ed7d23504a21c1837b6f78f345439ca76592043 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=73d2257ada5f6b41 traceID=f4c71e2cd8f5468baa3f08c78ff913d9
	Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.261021424Z" level=error msg="Error running exec f9b42e182c3f544f16b8e520f7ff27d4a1db9bf46c92d3c73d09d0cc026bd398 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=fe17ccaa7ed484a2 traceID=d75534bef09f645d796fea077a17d190
	Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.427035575Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.427035610Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.427765319Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.427788353Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.429080063Z" level=error msg="Error running exec efa3d5401188a5731dd13bf470f608dee1855545736d049c7fe4a9252671173e in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=c55f759e9e7de4df traceID=f0d584bec7844ce0f9da22fcb525d258
	Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.429784326Z" level=error msg="Error running exec f1e7cfac7879ae1f805c3317fdffb8236f0e017d42fde8406905039c3a6f3ac0 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" spanID=33c3e99e37cfad30 traceID=899a79309f45bf2f14582dc77e10915d
	Sep 27 00:22:19 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:22:19.470407684Z" level=info msg="ignoring event" container=ce0f95495465f6082ad910b695a398fd1abb55f85605cdea136b77cefb462fe2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:22:28 ubuntu-20-agent-9 cri-dockerd[127689]: time="2024-09-27T00:22:28Z" level=error msg="error getting RW layer size for container ID 'dca01d29e59d31c5cec505ae4fc9af9fadda55955c5e7343d6d3dd6a8bafd167': Error response from daemon: No such container: dca01d29e59d31c5cec505ae4fc9af9fadda55955c5e7343d6d3dd6a8bafd167"
	Sep 27 00:22:28 ubuntu-20-agent-9 cri-dockerd[127689]: time="2024-09-27T00:22:28Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dca01d29e59d31c5cec505ae4fc9af9fadda55955c5e7343d6d3dd6a8bafd167'"
	Sep 27 00:23:39 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:23:39.942728851Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=dc9e9c8fe9ad1f4c traceID=988b9d8e984c98bf88d12e5db10bd987
	Sep 27 00:23:39 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:23:39.945119098Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=dc9e9c8fe9ad1f4c traceID=988b9d8e984c98bf88d12e5db10bd987
	Sep 27 00:26:00 ubuntu-20-agent-9 cri-dockerd[127689]: time="2024-09-27T00:26:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/05a0c40afc1b105ed2b99466a8ded414220e7d66cc4fd14513ed8537f9533408/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 27 00:26:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:26:00.968223814Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2b643a0a4af457ce traceID=381265c75db230cfacf9a4cecad65a53
	Sep 27 00:26:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:26:00.970742929Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=2b643a0a4af457ce traceID=381265c75db230cfacf9a4cecad65a53
	Sep 27 00:26:11 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:26:11.945337285Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=da6241dedf25dedc traceID=c8b5153b61ee4151b35f2ea417ecc055
	Sep 27 00:26:11 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:26:11.947699648Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=da6241dedf25dedc traceID=c8b5153b61ee4151b35f2ea417ecc055
	Sep 27 00:26:34 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:26:34.941175451Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=6fa477e395867ce7 traceID=1034f779699e25e4f45d57ebeda1497d
	Sep 27 00:26:34 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:26:34.943412390Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=6fa477e395867ce7 traceID=1034f779699e25e4f45d57ebeda1497d
	Sep 27 00:27:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:27:00.392499477Z" level=info msg="ignoring event" container=05a0c40afc1b105ed2b99466a8ded414220e7d66cc4fd14513ed8537f9533408 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:27:00.678222896Z" level=info msg="ignoring event" container=c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:27:00.731925162Z" level=info msg="ignoring event" container=8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:27:00.813428728Z" level=info msg="ignoring event" container=758e8d161387b2b912d19e92304cb494569c51650f613d7ff053e817484e383e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 27 00:27:00 ubuntu-20-agent-9 dockerd[127360]: time="2024-09-27T00:27:00.891019276Z" level=info msg="ignoring event" container=a6ee60d7c438368452a602c57d7e4c3406b0d4ba690c8da561c1ec2e78f47991 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	ce0f95495465f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            4 minutes ago       Exited              gadget                                   6                   7a11cf51cdfef       gadget-zsrbc
	ca7eac916e6f1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   9a54d61c131b8       gcp-auth-89d5ffd79-68r2k
	06b7ac20f7718       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   81523afd9c74d       csi-hostpathplugin-9646r
	e8960bbdc90f0       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   81523afd9c74d       csi-hostpathplugin-9646r
	820915b2ac3b6       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   81523afd9c74d       csi-hostpathplugin-9646r
	d0814b540f12e       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   81523afd9c74d       csi-hostpathplugin-9646r
	8e1d359469553       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   81523afd9c74d       csi-hostpathplugin-9646r
	00b4ac66c18f9       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   302ea4417927a       csi-hostpath-resizer-0
	89d52d576f143       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   81523afd9c74d       csi-hostpathplugin-9646r
	462b3fa12bda1       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   9d5d8cd6fa6e7       csi-hostpath-attacher-0
	adbb46dace6f0       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   342cd307b62e2       snapshot-controller-56fcc65765-2wjqz
	b9df1158720a1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   9d1cc670d574a       snapshot-controller-56fcc65765-g8jrp
	adba821d320b5       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   b4a2cc2a14139       local-path-provisioner-86d989889c-knfxk
	8af29fe74f1ff       gcr.io/k8s-minikube/kube-registry-proxy@sha256:9fd683b2e47c5fded3410c69f414f05cdee737597569f52854347f889b118982                              10 minutes ago      Exited              registry-proxy                           0                   a6ee60d7c4383       registry-proxy-rbxpj
	c3ee262cb7bba       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             10 minutes ago      Exited              registry                                 0                   758e8d161387b       registry-66c9cd494c-5zfg4
	73934b8e99884       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        10 minutes ago      Running             metrics-server                           0                   735c3f6e6084f       metrics-server-84c5f94fbc-zb9hk
	720baba6a0a50       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   f5cf42776ab9f       yakd-dashboard-67d98fc6b-pn7ph
	0ea696ac52da5       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               11 minutes ago      Running             cloud-spanner-emulator                   0                   5a34cb10739d6       cloud-spanner-emulator-5b584cc74-gppng
	cc4778fb3760c       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   cb890f7ad36eb       nvidia-device-plugin-daemonset-rkscq
	e7fc0464842cb       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   5e3ff354ea15c       coredns-7c65d6cfc9-ngvr4
	04c3b6319c92f       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   aaaaca261e98f       storage-provisioner
	4bbbf5ccddce6       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   edd77aaf6a0c9       kube-proxy-r2kqg
	234705c660c04       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   9c2068a7695da       kube-controller-manager-ubuntu-20-agent-9
	dbeeaa776b168       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   88f014fe5dc15       etcd-ubuntu-20-agent-9
	dd123a1910a91       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   efe699020245d       kube-scheduler-ubuntu-20-agent-9
	9a96c9e4e13e6       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   7099bd6337394       kube-apiserver-ubuntu-20-agent-9
	
	
	==> coredns [e7fc0464842c] <==
	[INFO] 10.244.0.9:33024 - 57761 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000215701s
	[INFO] 10.244.0.9:59900 - 61589 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082388s
	[INFO] 10.244.0.9:59900 - 61300 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096587s
	[INFO] 10.244.0.9:49832 - 6045 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000088311s
	[INFO] 10.244.0.9:49832 - 5603 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000114274s
	[INFO] 10.244.0.9:47003 - 11511 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000127998s
	[INFO] 10.244.0.9:47003 - 11170 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000116317s
	[INFO] 10.244.0.9:33332 - 11698 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000085891s
	[INFO] 10.244.0.9:33332 - 11435 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000085608s
	[INFO] 10.244.0.9:41602 - 16135 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000078984s
	[INFO] 10.244.0.9:41602 - 16550 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000140059s
	[INFO] 10.244.0.23:52742 - 57224 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000304535s
	[INFO] 10.244.0.23:39146 - 34841 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000381091s
	[INFO] 10.244.0.23:51632 - 34204 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161995s
	[INFO] 10.244.0.23:39105 - 19433 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00021772s
	[INFO] 10.244.0.23:48202 - 26605 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135991s
	[INFO] 10.244.0.23:50207 - 30094 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000188839s
	[INFO] 10.244.0.23:33285 - 57009 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.002580052s
	[INFO] 10.244.0.23:46183 - 64236 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.002562664s
	[INFO] 10.244.0.23:50335 - 27680 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004121422s
	[INFO] 10.244.0.23:48641 - 26642 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00472598s
	[INFO] 10.244.0.23:37195 - 56263 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002239937s
	[INFO] 10.244.0.23:47851 - 38383 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003468041s
	[INFO] 10.244.0.23:39185 - 39105 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002931801s
	[INFO] 10.244.0.23:34588 - 24477 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.003062669s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-9
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-9
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eee16a295c071ed5a0e96cbbc00bcd13b2654625
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_27T00_15_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-9
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-9"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 27 Sep 2024 00:15:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-9
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 27 Sep 2024 00:26:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 27 Sep 2024 00:22:47 +0000   Fri, 27 Sep 2024 00:15:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 27 Sep 2024 00:22:47 +0000   Fri, 27 Sep 2024 00:15:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 27 Sep 2024 00:22:47 +0000   Fri, 27 Sep 2024 00:15:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 27 Sep 2024 00:22:47 +0000   Fri, 27 Sep 2024 00:15:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.154.0.4
	  Hostname:    ubuntu-20-agent-9
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                4894487b-7b30-e033-3a9d-c6f45b6c4cf8
	  Boot ID:                    3c2d51bd-7f5b-4d40-a494-e8e3ec27c9f9
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-5b584cc74-gppng       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-zsrbc                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-68r2k                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-ngvr4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-9646r                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-9                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-9             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-9    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-r2kqg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-9             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-zb9hk              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-rkscq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-2wjqz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-g8jrp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-knfxk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-pn7ph               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node ubuntu-20-agent-9 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node ubuntu-20-agent-9 event: Registered Node ubuntu-20-agent-9 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3a 75 f0 ba e0 bb 08 06
	[  +1.354370] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 cb 1b 34 5d 43 08 06
	[  +0.010250] IPv4: martian source 10.244.0.1 from 10.244.0.12, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 f3 06 28 20 57 08 06
	[  +2.816799] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a 96 05 bc 08 21 08 06
	[  +1.771058] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 ad cf 8b 10 5d 08 06
	[  +2.020097] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 62 c8 87 06 4d 2a 08 06
	[  +5.837384] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 d4 6d c5 b4 4c 08 06
	[  +0.062701] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff aa 17 bb bd 4e 31 08 06
	[  +0.126366] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 02 b5 7b d0 2a 08 06
	[ +28.674986] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a e2 2c 32 5b 91 08 06
	[  +0.031487] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 72 dd 84 17 3a 08 06
	[Sep27 00:17] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 2a 49 a4 35 f3 08 06
	[  +0.000518] IPv4: martian source 10.244.0.23 from 10.244.0.6, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 ac 46 8e 90 7d 08 06
	
	
	==> etcd [dbeeaa776b16] <==
	{"level":"info","ts":"2024-09-27T00:15:34.383390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-27T00:15:34.383415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgPreVoteResp from 82d4d36e40f9b4a at term 1"}
	{"level":"info","ts":"2024-09-27T00:15:34.383436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became candidate at term 2"}
	{"level":"info","ts":"2024-09-27T00:15:34.383443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a received MsgVoteResp from 82d4d36e40f9b4a at term 2"}
	{"level":"info","ts":"2024-09-27T00:15:34.383455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"82d4d36e40f9b4a became leader at term 2"}
	{"level":"info","ts":"2024-09-27T00:15:34.383465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 82d4d36e40f9b4a elected leader 82d4d36e40f9b4a at term 2"}
	{"level":"info","ts":"2024-09-27T00:15:34.384290Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:15:34.384778Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T00:15:34.384776Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"82d4d36e40f9b4a","local-member-attributes":"{Name:ubuntu-20-agent-9 ClientURLs:[https://10.154.0.4:2379]}","request-path":"/0/members/82d4d36e40f9b4a/attributes","cluster-id":"7cf21852ad6c12ab","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-27T00:15:34.384805Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-27T00:15:34.385013Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7cf21852ad6c12ab","local-member-id":"82d4d36e40f9b4a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:15:34.385085Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-27T00:15:34.385104Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-27T00:15:34.385124Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:15:34.385153Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-27T00:15:34.386524Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T00:15:34.387499Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-27T00:15:34.388097Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.154.0.4:2379"}
	{"level":"info","ts":"2024-09-27T00:15:34.388592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-27T00:15:57.063883Z","caller":"traceutil/trace.go:171","msg":"trace[223436893] transaction","detail":"{read_only:false; response_revision:877; number_of_response:1; }","duration":"137.954747ms","start":"2024-09-27T00:15:56.925907Z","end":"2024-09-27T00:15:57.063862Z","steps":["trace[223436893] 'process raft request'  (duration: 129.183297ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-27T00:16:02.937749Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.196652ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-27T00:16:02.937845Z","caller":"traceutil/trace.go:171","msg":"trace[1777800074] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:902; }","duration":"113.320376ms","start":"2024-09-27T00:16:02.824508Z","end":"2024-09-27T00:16:02.937829Z","steps":["trace[1777800074] 'range keys from in-memory index tree'  (duration: 113.117232ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-27T00:25:34.439520Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1703}
	{"level":"info","ts":"2024-09-27T00:25:34.463116Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1703,"took":"22.957679ms","hash":2922577851,"current-db-size-bytes":8384512,"current-db-size":"8.4 MB","current-db-size-in-use-bytes":4362240,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-09-27T00:25:34.463182Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2922577851,"revision":1703,"compact-revision":-1}
	
	
	==> gcp-auth [ca7eac916e6f] <==
	2024/09/27 00:17:06 GCP Auth Webhook started!
	2024/09/27 00:17:23 Ready to marshal response ...
	2024/09/27 00:17:23 Ready to write response ...
	2024/09/27 00:17:23 Ready to marshal response ...
	2024/09/27 00:17:23 Ready to write response ...
	2024/09/27 00:17:48 Ready to marshal response ...
	2024/09/27 00:17:48 Ready to write response ...
	2024/09/27 00:17:48 Ready to marshal response ...
	2024/09/27 00:17:48 Ready to write response ...
	2024/09/27 00:17:48 Ready to marshal response ...
	2024/09/27 00:17:48 Ready to write response ...
	2024/09/27 00:26:00 Ready to marshal response ...
	2024/09/27 00:26:00 Ready to write response ...
	
	
	==> kernel <==
	 00:27:01 up  2:09,  0 users,  load average: 0.15, 0.34, 0.39
	Linux ubuntu-20-agent-9 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [9a96c9e4e13e] <==
	W0927 00:16:30.677780       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.91.226:443: connect: connection refused
	W0927 00:16:31.085501       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.124.116:443: connect: connection refused
	E0927 00:16:31.085547       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.124.116:443: connect: connection refused" logger="UnhandledError"
	W0927 00:16:53.098408       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.124.116:443: connect: connection refused
	E0927 00:16:53.098451       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.124.116:443: connect: connection refused" logger="UnhandledError"
	W0927 00:16:53.112089       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.124.116:443: connect: connection refused
	E0927 00:16:53.112201       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.111.124.116:443: connect: connection refused" logger="UnhandledError"
	I0927 00:17:23.385752       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0927 00:17:23.404645       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0927 00:17:37.860763       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0927 00:17:37.880102       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0927 00:17:37.975732       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0927 00:17:37.991821       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0927 00:17:38.016645       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0927 00:17:38.017662       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0927 00:17:38.186693       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0927 00:17:38.204385       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0927 00:17:38.226569       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0927 00:17:38.910654       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0927 00:17:39.033990       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0927 00:17:39.046922       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0927 00:17:39.124174       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0927 00:17:39.227003       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0927 00:17:39.305736       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0927 00:17:39.430827       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [234705c660c0] <==
	W0927 00:25:59.316067       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:25:59.316108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:26:01.442959       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:01.443024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:26:08.646840       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:08.646889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:26:12.501731       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:12.501781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:26:23.950067       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:23.950115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:26:27.074730       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:27.074779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:26:28.608445       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:28.608490       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:26:48.633823       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:48.633868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:26:50.548199       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:50.548245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:26:55.725638       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:55.725691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:26:58.030860       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:58.030903       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0927 00:26:58.894926       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0927 00:26:58.894976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0927 00:27:00.638165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="7.757µs"
	
	
	==> kube-proxy [4bbbf5ccddce] <==
	I0927 00:15:44.189088       1 server_linux.go:66] "Using iptables proxy"
	I0927 00:15:44.339338       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.154.0.4"]
	E0927 00:15:44.339521       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0927 00:15:44.502667       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0927 00:15:44.502744       1 server_linux.go:169] "Using iptables Proxier"
	I0927 00:15:44.569560       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0927 00:15:44.570047       1 server.go:483] "Version info" version="v1.31.1"
	I0927 00:15:44.570077       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0927 00:15:44.572381       1 config.go:199] "Starting service config controller"
	I0927 00:15:44.572414       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0927 00:15:44.572439       1 config.go:105] "Starting endpoint slice config controller"
	I0927 00:15:44.572443       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0927 00:15:44.572950       1 config.go:328] "Starting node config controller"
	I0927 00:15:44.572960       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0927 00:15:44.673419       1 shared_informer.go:320] Caches are synced for node config
	I0927 00:15:44.673464       1 shared_informer.go:320] Caches are synced for service config
	I0927 00:15:44.673479       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [dd123a1910a9] <==
	W0927 00:15:35.354017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0927 00:15:35.354029       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:15:35.354040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0927 00:15:35.354042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:35.354105       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:35.354122       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:36.270969       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0927 00:15:36.271010       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:36.382268       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:36.382308       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:36.468125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:36.468168       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:36.501672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0927 00:15:36.501721       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:36.501760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0927 00:15:36.501796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:36.535404       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0927 00:15:36.535454       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:36.542953       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0927 00:15:36.543001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:36.590201       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0927 00:15:36.590468       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0927 00:15:36.728018       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0927 00:15:36.728073       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0927 00:15:38.451719       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2024-09-20 09:35:01 UTC, end at Fri 2024-09-27 00:27:01 UTC. --
	Sep 27 00:26:48 ubuntu-20-agent-9 kubelet[128562]: I0927 00:26:48.789467  128562 scope.go:117] "RemoveContainer" containerID="ce0f95495465f6082ad910b695a398fd1abb55f85605cdea136b77cefb462fe2"
	Sep 27 00:26:48 ubuntu-20-agent-9 kubelet[128562]: E0927 00:26:48.789649  128562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-zsrbc_gadget(8164a4a1-7793-4235-8a51-19ef23903995)\"" pod="gadget/gadget-zsrbc" podUID="8164a4a1-7793-4235-8a51-19ef23903995"
	Sep 27 00:26:49 ubuntu-20-agent-9 kubelet[128562]: E0927 00:26:49.791351  128562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="0a623e67-b01b-4dcd-b6c9-32493ac56396"
	Sep 27 00:26:53 ubuntu-20-agent-9 kubelet[128562]: E0927 00:26:53.791527  128562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="d20a513a-e4b3-49de-9860-4ea508ac296a"
	Sep 27 00:26:57 ubuntu-20-agent-9 kubelet[128562]: I0927 00:26:57.790252  128562 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-rbxpj" secret="" err="secret \"gcp-auth\" not found"
	Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.573105  128562 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0a623e67-b01b-4dcd-b6c9-32493ac56396-gcp-creds\") pod \"0a623e67-b01b-4dcd-b6c9-32493ac56396\" (UID: \"0a623e67-b01b-4dcd-b6c9-32493ac56396\") "
	Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.573158  128562 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66kzv\" (UniqueName: \"kubernetes.io/projected/0a623e67-b01b-4dcd-b6c9-32493ac56396-kube-api-access-66kzv\") pod \"0a623e67-b01b-4dcd-b6c9-32493ac56396\" (UID: \"0a623e67-b01b-4dcd-b6c9-32493ac56396\") "
	Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.573189  128562 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a623e67-b01b-4dcd-b6c9-32493ac56396-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "0a623e67-b01b-4dcd-b6c9-32493ac56396" (UID: "0a623e67-b01b-4dcd-b6c9-32493ac56396"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.573272  128562 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0a623e67-b01b-4dcd-b6c9-32493ac56396-gcp-creds\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.575095  128562 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a623e67-b01b-4dcd-b6c9-32493ac56396-kube-api-access-66kzv" (OuterVolumeSpecName: "kube-api-access-66kzv") pod "0a623e67-b01b-4dcd-b6c9-32493ac56396" (UID: "0a623e67-b01b-4dcd-b6c9-32493ac56396"). InnerVolumeSpecName "kube-api-access-66kzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.675303  128562 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-66kzv\" (UniqueName: \"kubernetes.io/projected/0a623e67-b01b-4dcd-b6c9-32493ac56396-kube-api-access-66kzv\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.977485  128562 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h578x\" (UniqueName: \"kubernetes.io/projected/32dd9391-b30e-4231-9d9e-8bd0457919d8-kube-api-access-h578x\") pod \"32dd9391-b30e-4231-9d9e-8bd0457919d8\" (UID: \"32dd9391-b30e-4231-9d9e-8bd0457919d8\") "
	Sep 27 00:27:00 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:00.979841  128562 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32dd9391-b30e-4231-9d9e-8bd0457919d8-kube-api-access-h578x" (OuterVolumeSpecName: "kube-api-access-h578x") pod "32dd9391-b30e-4231-9d9e-8bd0457919d8" (UID: "32dd9391-b30e-4231-9d9e-8bd0457919d8"). InnerVolumeSpecName "kube-api-access-h578x". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.077830  128562 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnszc\" (UniqueName: \"kubernetes.io/projected/ae04301c-b1c9-4a19-af2e-04bc0071e797-kube-api-access-nnszc\") pod \"ae04301c-b1c9-4a19-af2e-04bc0071e797\" (UID: \"ae04301c-b1c9-4a19-af2e-04bc0071e797\") "
	Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.077901  128562 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-h578x\" (UniqueName: \"kubernetes.io/projected/32dd9391-b30e-4231-9d9e-8bd0457919d8-kube-api-access-h578x\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.079736  128562 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae04301c-b1c9-4a19-af2e-04bc0071e797-kube-api-access-nnszc" (OuterVolumeSpecName: "kube-api-access-nnszc") pod "ae04301c-b1c9-4a19-af2e-04bc0071e797" (UID: "ae04301c-b1c9-4a19-af2e-04bc0071e797"). InnerVolumeSpecName "kube-api-access-nnszc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.178754  128562 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nnszc\" (UniqueName: \"kubernetes.io/projected/ae04301c-b1c9-4a19-af2e-04bc0071e797-kube-api-access-nnszc\") on node \"ubuntu-20-agent-9\" DevicePath \"\""
	Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.358391  128562 scope.go:117] "RemoveContainer" containerID="8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da"
	Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.377973  128562 scope.go:117] "RemoveContainer" containerID="8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da"
	Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: E0927 00:27:01.380177  128562 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da" containerID="8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da"
	Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.380254  128562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da"} err="failed to get container status \"8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8af29fe74f1ff0e2b8b18d87477eb857c144884cad4c33ddc6f70ba03d5df1da"
	Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.380298  128562 scope.go:117] "RemoveContainer" containerID="c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9"
	Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.406186  128562 scope.go:117] "RemoveContainer" containerID="c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9"
	Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: E0927 00:27:01.407411  128562 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9" containerID="c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9"
	Sep 27 00:27:01 ubuntu-20-agent-9 kubelet[128562]: I0927 00:27:01.407462  128562 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9"} err="failed to get container status \"c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9\": rpc error: code = Unknown desc = Error response from daemon: No such container: c3ee262cb7bba78957050d7ba4b23a0535dc1f6167249c55de96b973f71504a9"
	
	
	==> storage-provisioner [04c3b6319c92] <==
	I0927 00:15:44.736628       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0927 00:15:44.753330       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0927 00:15:44.753413       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0927 00:15:44.763082       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0927 00:15:44.763304       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_6a70a112-d043-4475-acee-e9cda686ee4c!
	I0927 00:15:44.763427       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"82539bb9-ad30-48bd-a0fd-ef0aafd10987", APIVersion:"v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-9_6a70a112-d043-4475-acee-e9cda686ee4c became leader
	I0927 00:15:44.864472       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-9_6a70a112-d043-4475-acee-e9cda686ee4c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-9/10.154.0.4
	Start Time:       Fri, 27 Sep 2024 00:17:48 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-brwb6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-brwb6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-9
	  Normal   Pulling    7m42s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m42s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m42s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m27s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m1s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (71.91s)

                                                
                                    

Test pass (104/166)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.85
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 1.4
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.12
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
22 TestOffline 40.86
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 102.13
29 TestAddons/serial/Volcano 40.14
31 TestAddons/serial/GCPAuth/Namespaces 0.12
35 TestAddons/parallel/InspektorGadget 10.48
36 TestAddons/parallel/MetricsServer 5.39
38 TestAddons/parallel/CSI 43.43
39 TestAddons/parallel/Headlamp 49.89
40 TestAddons/parallel/CloudSpanner 5.27
42 TestAddons/parallel/NvidiaDevicePlugin 6.23
43 TestAddons/parallel/Yakd 10.42
44 TestAddons/StoppedEnableDisable 10.69
46 TestCertExpiration 228.06
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 31.06
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 30.19
61 TestFunctional/serial/KubeContext 0.05
62 TestFunctional/serial/KubectlGetPods 0.07
64 TestFunctional/serial/MinikubeKubectlCmd 0.11
65 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
66 TestFunctional/serial/ExtraConfig 35.15
67 TestFunctional/serial/ComponentHealth 0.07
68 TestFunctional/serial/LogsCmd 0.84
69 TestFunctional/serial/LogsFileCmd 0.85
70 TestFunctional/serial/InvalidService 4.95
72 TestFunctional/parallel/ConfigCmd 0.29
73 TestFunctional/parallel/DashboardCmd 4.81
74 TestFunctional/parallel/DryRun 0.17
75 TestFunctional/parallel/InternationalLanguage 0.09
76 TestFunctional/parallel/StatusCmd 0.44
79 TestFunctional/parallel/ProfileCmd/profile_not_create 0.23
80 TestFunctional/parallel/ProfileCmd/profile_list 0.21
81 TestFunctional/parallel/ProfileCmd/profile_json_output 0.22
83 TestFunctional/parallel/ServiceCmd/DeployApp 10.15
84 TestFunctional/parallel/ServiceCmd/List 0.34
85 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
86 TestFunctional/parallel/ServiceCmd/HTTPS 0.16
87 TestFunctional/parallel/ServiceCmd/Format 0.16
88 TestFunctional/parallel/ServiceCmd/URL 0.15
89 TestFunctional/parallel/ServiceCmdConnect 8.32
90 TestFunctional/parallel/AddonsCmd 0.12
91 TestFunctional/parallel/PersistentVolumeClaim 21.62
104 TestFunctional/parallel/MySQL 22.99
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 14.26
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 13.81
113 TestFunctional/parallel/NodeLabels 0.06
117 TestFunctional/parallel/Version/short 0.05
118 TestFunctional/parallel/Version/components 0.24
119 TestFunctional/parallel/License 0.96
120 TestFunctional/delete_echo-server_images 0.03
121 TestFunctional/delete_my-image_image 0.02
122 TestFunctional/delete_minikube_cached_images 0.02
127 TestImageBuild/serial/Setup 14.14
128 TestImageBuild/serial/NormalBuild 2.94
129 TestImageBuild/serial/BuildWithBuildArg 0.92
130 TestImageBuild/serial/BuildWithDockerIgnore 0.74
131 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.75
135 TestJSONOutput/start/Command 25
136 TestJSONOutput/start/Audit 0
138 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/pause/Command 0.54
142 TestJSONOutput/pause/Audit 0
144 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
147 TestJSONOutput/unpause/Command 0.42
148 TestJSONOutput/unpause/Audit 0
150 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/stop/Command 10.44
154 TestJSONOutput/stop/Audit 0
156 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
158 TestErrorJSONOutput 0.2
163 TestMainNoArgs 0.05
164 TestMinikubeProfile 34.86
172 TestPause/serial/Start 28.55
173 TestPause/serial/SecondStartNoReconfiguration 23.59
174 TestPause/serial/Pause 0.51
175 TestPause/serial/VerifyStatus 0.14
176 TestPause/serial/Unpause 0.4
177 TestPause/serial/PauseAgain 0.57
178 TestPause/serial/DeletePaused 1.79
179 TestPause/serial/VerifyDeletedResources 0.06
193 TestRunningBinaryUpgrade 76.13
195 TestStoppedBinaryUpgrade/Setup 2.09
196 TestStoppedBinaryUpgrade/Upgrade 51.1
197 TestStoppedBinaryUpgrade/MinikubeLogs 0.83
198 TestKubernetesUpgrade 316.7
TestDownloadOnly/v1.20.0/json-events (1.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.847219978s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.85s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (59.748826ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:14:40
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:14:40.402867  123261 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:14:40.402991  123261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:40.402999  123261 out.go:358] Setting ErrFile to fd 2...
	I0927 00:14:40.403004  123261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:40.403194  123261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-116460/.minikube/bin
	W0927 00:14:40.403370  123261 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19711-116460/.minikube/config/config.json: open /home/jenkins/minikube-integration/19711-116460/.minikube/config/config.json: no such file or directory
	I0927 00:14:40.403937  123261 out.go:352] Setting JSON to true
	I0927 00:14:40.404900  123261 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7018,"bootTime":1727389062,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:14:40.405033  123261 start.go:139] virtualization: kvm guest
	I0927 00:14:40.407604  123261 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0927 00:14:40.407736  123261 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19711-116460/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 00:14:40.407805  123261 notify.go:220] Checking for updates...
	I0927 00:14:40.409134  123261 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:14:40.410774  123261 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:14:40.412218  123261 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-116460/kubeconfig
	I0927 00:14:40.413362  123261 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-116460/.minikube
	I0927 00:14:40.414894  123261 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (1.4s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.403704549s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (1.40s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (60.099327ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC | 27 Sep 24 00:14 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 27 Sep 24 00:14 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/27 00:14:42
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0927 00:14:42.554224  123414 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:14:42.554344  123414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:42.554353  123414 out.go:358] Setting ErrFile to fd 2...
	I0927 00:14:42.554358  123414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:14:42.554537  123414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-116460/.minikube/bin
	I0927 00:14:42.555091  123414 out.go:352] Setting JSON to true
	I0927 00:14:42.555897  123414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7021,"bootTime":1727389062,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:14:42.555996  123414 start.go:139] virtualization: kvm guest
	I0927 00:14:42.558117  123414 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0927 00:14:42.558227  123414 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19711-116460/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 00:14:42.558278  123414 notify.go:220] Checking for updates...
	I0927 00:14:42.559875  123414 out.go:169] MINIKUBE_LOCATION=19711
	I0927 00:14:42.561323  123414 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:14:42.562492  123414 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19711-116460/kubeconfig
	I0927 00:14:42.563616  123414 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-116460/.minikube
	I0927 00:14:42.564682  123414 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
I0927 00:14:44.488707  123249 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:41241 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.57s)

TestOffline (40.86s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (39.21933229s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.64097466s)
--- PASS: TestOffline (40.86s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (48.25984ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (48.148947ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (102.13s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (1m42.126049548s)
--- PASS: TestAddons/Setup (102.13s)

TestAddons/serial/Volcano (40.14s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 8.732204ms
addons_test.go:851: volcano-controller stabilized in 8.822583ms
addons_test.go:835: volcano-scheduler stabilized in 8.870867ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-4nv6b" [991cdf18-6c5c-45b3-9931-44299d6d03f6] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003110036s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-2c6h7" [f2e34c11-285e-4dff-9faa-bd42078e47fd] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004002871s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-jnz49" [e70fa7a8-cf7b-4435-9aca-f3f150a8baaf] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004272088s
addons_test.go:870: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [eb888b6f-e50a-455e-9cb7-91f4a1e192a8] Pending
helpers_test.go:344: "test-job-nginx-0" [eb888b6f-e50a-455e-9cb7-91f4a1e192a8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [eb888b6f-e50a-455e-9cb7-91f4a1e192a8] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003741513s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.817139115s)
--- PASS: TestAddons/serial/Volcano (40.14s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/InspektorGadget (10.48s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zsrbc" [8164a4a1-7793-4235-8a51-19ef23903995] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003877268s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.470922582s)
--- PASS: TestAddons/parallel/InspektorGadget (10.48s)

TestAddons/parallel/MetricsServer (5.39s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.414481ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-zb9hk" [8f9b05a4-44d3-4031-a273-6c55abe9fb84] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004021195s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.39s)

TestAddons/parallel/CSI (43.43s)

=== RUN   TestAddons/parallel/CSI
I0927 00:27:17.912958  123249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0927 00:27:17.917505  123249 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0927 00:27:17.917535  123249 kapi.go:107] duration metric: took 4.591261ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.602342ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [89575d74-0cd2-4229-9b15-7d0d9286c478] Pending
helpers_test.go:344: "task-pv-pod" [89575d74-0cd2-4229-9b15-7d0d9286c478] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [89575d74-0cd2-4229-9b15-7d0d9286c478] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003266882s
addons_test.go:528: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b143679c-3ce2-4fe1-b3e6-47b837ef2464] Pending
helpers_test.go:344: "task-pv-pod-restore" [b143679c-3ce2-4fe1-b3e6-47b837ef2464] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b143679c-3ce2-4fe1-b3e6-47b837ef2464] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003129097s
addons_test.go:570: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.309231178s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.43s)

TestAddons/parallel/Headlamp (49.89s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-sksvc" [42856d7d-1a04-4e48-a78a-886424f02faf] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-sksvc" [42856d7d-1a04-4e48-a78a-886424f02faf] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 44.003964206s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.412564323s)
--- PASS: TestAddons/parallel/Headlamp (49.89s)

TestAddons/parallel/CloudSpanner (5.27s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-gppng" [a2b5686d-2a1d-4bcb-972c-a41abc9c6435] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004045582s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (5.27s)

TestAddons/parallel/NvidiaDevicePlugin (6.23s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rkscq" [b8bdd9f8-c0ca-4711-bd0d-a07df1e4fded] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004016106s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.23s)

TestAddons/parallel/Yakd (10.42s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-pn7ph" [811be57a-b324-4eae-b9c4-37e636c8bb47] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00365286s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.418168106s)
--- PASS: TestAddons/parallel/Yakd (10.42s)

TestAddons/StoppedEnableDisable (10.69s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.383215287s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.69s)

TestCertExpiration (228.06s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.715793419s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (31.588512252s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.751806372s)
--- PASS: TestCertExpiration (228.06s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19711-116460/.minikube/files/etc/test/nested/copy/123249/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (31.06s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (31.057149067s)
--- PASS: TestFunctional/serial/StartWithProxy (31.06s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (30.19s)
=== RUN   TestFunctional/serial/SoftStart
I0927 00:33:44.165838  123249 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (30.189386313s)
functional_test.go:663: soft start took 30.190215645s for "minikube" cluster.
I0927 00:34:14.355595  123249 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (30.19s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (35.15s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.152692992s)
functional_test.go:761: restart took 35.152816826s for "minikube" cluster.
I0927 00:34:49.834241  123249 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (35.15s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.84s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.84s)

TestFunctional/serial/LogsFileCmd (0.85s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd209766803/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.85s)

TestFunctional/serial/InvalidService (4.95s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (172.429158ms)
-- stdout --
	|-----------|-------------|-------------|-------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL           |
	|-----------|-------------|-------------|-------------------------|
	| default   | invalid-svc |          80 | http://10.154.0.4:31521 |
	|-----------|-------------|-------------|-------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.605871689s)
--- PASS: TestFunctional/serial/InvalidService (4.95s)

TestFunctional/parallel/ConfigCmd (0.29s)
=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (46.865842ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (45.292522ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)

TestFunctional/parallel/DashboardCmd (4.81s)
=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/27 00:35:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 158343: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (4.81s)

TestFunctional/parallel/DryRun (0.17s)
=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (87.837224ms)
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-116460/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-116460/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0927 00:35:01.675398  158713 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:35:01.675670  158713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:35:01.675681  158713 out.go:358] Setting ErrFile to fd 2...
	I0927 00:35:01.675687  158713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:35:01.675875  158713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-116460/.minikube/bin
	I0927 00:35:01.676471  158713 out.go:352] Setting JSON to false
	I0927 00:35:01.677558  158713 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8240,"bootTime":1727389062,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:35:01.677665  158713 start.go:139] virtualization: kvm guest
	I0927 00:35:01.679950  158713 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0927 00:35:01.681795  158713 notify.go:220] Checking for updates...
	I0927 00:35:01.681808  158713 out.go:177]   - MINIKUBE_LOCATION=19711
	W0927 00:35:01.681786  158713 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19711-116460/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 00:35:01.684790  158713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:35:01.686152  158713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-116460/kubeconfig
	I0927 00:35:01.687795  158713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-116460/.minikube
	I0927 00:35:01.689519  158713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:35:01.690940  158713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:35:01.692962  158713 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:35:01.693369  158713 exec_runner.go:51] Run: systemctl --version
	I0927 00:35:01.696464  158713 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:35:01.709727  158713 out.go:177] * Using the none driver based on existing profile
	I0927 00:35:01.711310  158713 start.go:297] selected driver: none
	I0927 00:35:01.711332  158713 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:35:01.711458  158713 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:35:01.711482  158713 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0927 00:35:01.711756  158713 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0927 00:35:01.714137  158713 out.go:201] 
	W0927 00:35:01.715483  158713 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0927 00:35:01.716812  158713 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.17s)

TestFunctional/parallel/InternationalLanguage (0.09s)
=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (89.481201ms)
-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-116460/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-116460/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0927 00:35:01.849934  158743 out.go:345] Setting OutFile to fd 1 ...
	I0927 00:35:01.850077  158743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:35:01.850088  158743 out.go:358] Setting ErrFile to fd 2...
	I0927 00:35:01.850093  158743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0927 00:35:01.850383  158743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19711-116460/.minikube/bin
	I0927 00:35:01.851066  158743 out.go:352] Setting JSON to false
	I0927 00:35:01.852266  158743 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8240,"bootTime":1727389062,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0927 00:35:01.852378  158743 start.go:139] virtualization: kvm guest
	I0927 00:35:01.854855  158743 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0927 00:35:01.856437  158743 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19711-116460/.minikube/cache/preloaded-tarball: no such file or directory
	I0927 00:35:01.856530  158743 out.go:177]   - MINIKUBE_LOCATION=19711
	I0927 00:35:01.856531  158743 notify.go:220] Checking for updates...
	I0927 00:35:01.858248  158743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0927 00:35:01.859712  158743 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19711-116460/kubeconfig
	I0927 00:35:01.861304  158743 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-116460/.minikube
	I0927 00:35:01.862760  158743 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0927 00:35:01.864247  158743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0927 00:35:01.866560  158743 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0927 00:35:01.867025  158743 exec_runner.go:51] Run: systemctl --version
	I0927 00:35:01.870061  158743 driver.go:394] Setting default libvirt URI to qemu:///system
	I0927 00:35:01.880429  158743 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0927 00:35:01.881953  158743 start.go:297] selected driver: none
	I0927 00:35:01.881977  158743 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.154.0.4 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0927 00:35:01.882142  158743 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0927 00:35:01.882174  158743 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0927 00:35:01.882683  158743 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0927 00:35:01.885052  158743 out.go:201] 
	W0927 00:35:01.886546  158743 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0927 00:35:01.888228  158743 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.09s)

TestFunctional/parallel/StatusCmd (0.44s)
=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.44s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.23s)

TestFunctional/parallel/ProfileCmd/profile_list (0.21s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "160.940304ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "47.946116ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.21s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.22s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "169.354802ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "48.220888ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-bf5vf" [0df46781-b4ce-4ccc-be73-5ae345c22fe3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-bf5vf" [0df46781-b4ce-4ccc-be73-5ae345c22fe3] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.002860895s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.15s)

TestFunctional/parallel/ServiceCmd/List (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "338.616671ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.154.0.4:31577
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.16s)

TestFunctional/parallel/ServiceCmd/Format (0.16s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.16s)

TestFunctional/parallel/ServiceCmd/URL (0.15s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.154.0.4:31577
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.15s)

TestFunctional/parallel/ServiceCmdConnect (8.32s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4rqwg" [eda11cd4-80ab-428e-be75-90205d18cfd8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4rqwg" [eda11cd4-80ab-428e-be75-90205d18cfd8] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003336258s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.154.0.4:30702
functional_test.go:1675: http://10.154.0.4:30702: success! body:

Hostname: hello-node-connect-67bdd5bbb4-4rqwg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.154.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=10.154.0.4:30702
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.32s)
=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [87131349-1ed5-4c52-85bd-018bd75020ac] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00480948s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [68cc1ba8-5248-44e1-ae44-5964be18ed53] Pending
helpers_test.go:344: "sp-pod" [68cc1ba8-5248-44e1-ae44-5964be18ed53] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [68cc1ba8-5248-44e1-ae44-5964be18ed53] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004369782s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [961e710d-24b0-4d3b-8a3c-2f3e4144b195] Pending
helpers_test.go:344: "sp-pod" [961e710d-24b0-4d3b-8a3c-2f3e4144b195] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [961e710d-24b0-4d3b-8a3c-2f3e4144b195] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003428615s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.62s)
=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-rh2hp" [11340b3c-0c04-49a1-884b-aa198c309eb2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-rh2hp" [11340b3c-0c04-49a1-884b-aa198c309eb2] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.004235826s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-rh2hp -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-rh2hp -- mysql -ppassword -e "show databases;": exit status 1 (122.046544ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0927 00:36:01.700720  123249 retry.go:31] will retry after 989.805629ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-rh2hp -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-rh2hp -- mysql -ppassword -e "show databases;": exit status 1 (111.216523ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0927 00:36:02.802292  123249 retry.go:31] will retry after 1.703305573s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-rh2hp -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-rh2hp -- mysql -ppassword -e "show databases;": exit status 1 (116.486719ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0927 00:36:04.623797  123249 retry.go:31] will retry after 2.66459849s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-rh2hp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.99s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (14.261699448s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (14.26s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (13.811082365s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (13.81s)
=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.24s)
=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.96s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.136734579s)
--- PASS: TestImageBuild/serial/Setup (14.14s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube: (2.93866576s)
--- PASS: TestImageBuild/serial/NormalBuild (2.94s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.92s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.74s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (24.996989363s)
--- PASS: TestJSONOutput/start/Command (25.00s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.54s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.42s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.442499167s)
--- PASS: TestJSONOutput/stop/Command (10.44s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.14802ms)

-- stdout --
	{"specversion":"1.0","id":"b26b3edc-e2e6-47a4-9d74-01b195d78fae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"93f4d9f7-e84d-4ce3-b289-b6858d7b0851","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19711"}}
	{"specversion":"1.0","id":"980b1ad1-89f2-4a93-9f1c-fd8eb8735492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4b4ec58c-6fe8-4466-b285-c8104f75d657","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19711-116460/kubeconfig"}}
	{"specversion":"1.0","id":"fc65ad66-ac22-47e3-b0d8-42d9168fba23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-116460/.minikube"}}
	{"specversion":"1.0","id":"70004051-b3d9-42ca-a7db-e8eede5eacfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"423bf867-3a24-4994-9d6a-a5543016d7e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"df9dcfd1-1794-4e9b-acd8-b7b6499582ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.20s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (15.003940605s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (17.932095369s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.309573702s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (34.86s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (28.54562684s)
--- PASS: TestPause/serial/Start (28.55s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (23.587717718s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (23.59s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.51s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (142.243913ms)

-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.14s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.40s)

TestPause/serial/PauseAgain (0.57s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.57s)

TestPause/serial/DeletePaused (1.79s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.793618113s)
--- PASS: TestPause/serial/DeletePaused (1.79s)

TestPause/serial/VerifyDeletedResources (0.06s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.06s)

TestRunningBinaryUpgrade (76.13s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1615149881 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1615149881 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (33.238507361s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (37.027534074s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.118894649s)
--- PASS: TestRunningBinaryUpgrade (76.13s)

TestStoppedBinaryUpgrade/Setup (2.09s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.09s)

TestStoppedBinaryUpgrade/Upgrade (51.1s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2286850962 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2286850962 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (14.986273724s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2286850962 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2286850962 -p minikube stop: (23.760140983s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (12.357105431s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (51.10s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.83s)

TestKubernetesUpgrade (316.7s)

=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (30.554501816s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.358600923s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (89.361174ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m16.681595658s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (70.470034ms)
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19711
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19711-116460/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19711-116460/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (17.545045105s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.335750259s)
--- PASS: TestKubernetesUpgrade (316.70s)
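The downgrade attempt above fails fast with exit status 106 (`K8S_DOWNGRADE_UNSUPPORTED`): minikube compares the requested `--kubernetes-version` against the version of the existing cluster and refuses anything lower. A rough sketch of that kind of guard, assuming plain `vMAJOR.MINOR.PATCH` strings; this is an illustration of the behavior seen in the log, not minikube's actual implementation:

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Parse a version string like 'v1.31.1' into a comparable tuple (1, 31, 1)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))


def check_requested_version(current: str, requested: str) -> str:
    """Refuse any downgrade of an existing cluster, as the log above shows."""
    if parse_version(requested) < parse_version(current):
        raise ValueError(
            f"K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade {current} to {requested}"
        )
    return requested


# Upgrading v1.20.0 -> v1.31.1 is allowed; downgrading back is rejected.
check_requested_version("v1.20.0", "v1.31.1")
try:
    check_requested_version("v1.31.1", "v1.20.0")
except ValueError as e:
    print(e)
```

Tuple comparison makes `v1.9.x` sort correctly below `v1.31.x`, which a naive string comparison would get wrong.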
Test skip (61/166)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
96 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
97 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
98 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
100 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
102 TestFunctional/parallel/SSHCmd 0
103 TestFunctional/parallel/CpCmd 0
105 TestFunctional/parallel/FileSync 0
106 TestFunctional/parallel/CertSync 0
111 TestFunctional/parallel/DockerEnv 0
112 TestFunctional/parallel/PodmanEnv 0
114 TestFunctional/parallel/ImageCommands 0
115 TestFunctional/parallel/NonActiveRuntimeDisabled 0
123 TestGvisorAddon 0
124 TestMultiControlPlane 0
132 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
159 TestKicCustomNetwork 0
160 TestKicExistingNetwork 0
161 TestKicCustomSubnet 0
162 TestKicStaticIP 0
165 TestMountStart 0
166 TestMultiNode 0
167 TestNetworkPlugins 0
168 TestNoKubernetes 0
169 TestChangeNoneUser 0
180 TestPreload 0
181 TestScheduledStopWindows 0
182 TestScheduledStopUnix 0
183 TestSkaffold 0
186 TestStartStop/group/old-k8s-version 0.14
187 TestStartStop/group/newest-cni 0.14
188 TestStartStop/group/default-k8s-diff-port 0.13
189 TestStartStop/group/no-preload 0.14
190 TestStartStop/group/disable-driver-mounts 0.13
191 TestStartStop/group/embed-certs 0.14
192 TestInsufficientStorage 0
199 TestMissingContainerUpgrade 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Ingress (0s)

=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

TestCertOptions (0s)

=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

TestForceSystemdFlag (0s)

=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

TestForceSystemdEnv (0s)

=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestErrorSpam (0s)

=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

TestFunctional/serial/CacheCmd (0s)

=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestFunctional/parallel/SSHCmd (0s)

=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

TestFunctional/parallel/CpCmd (0s)

=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

TestFunctional/parallel/FileSync (0s)

=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

TestFunctional/parallel/CertSync (0s)

=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/ImageCommands (0s)

=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

TestMultiControlPlane (0s)

=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestMountStart (0s)

=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

TestMultiNode (0s)

=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

TestNetworkPlugins (0s)

=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

TestNoKubernetes (0s)

=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestPreload (0s)

=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/old-k8s-version (0.14s)

=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.14s)

TestStartStop/group/newest-cni (0.14s)

=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.14s)

TestStartStop/group/default-k8s-diff-port (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.13s)

TestStartStop/group/no-preload (0.14s)

=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.14s)

TestStartStop/group/disable-driver-mounts (0.13s)

=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

TestStartStop/group/embed-certs (0.14s)

=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.14s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)