Test Report: none_Linux 19690

f8db61c9b74e1fc8d4208c01add19855c5953b45:2024-09-23:36339

Test fail (1/166)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 72.01        |
TestAddons/parallel/Registry (72.01s)
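For reference, the probe that timed out can be re-run by hand. The command below is taken verbatim from the log that follows (the `minikube` context name is the one used in this run); `wget --spider` checks the URL without downloading the body, and `-S` prints the server's response headers, which the test expects to begin with `HTTP/1.1 200`.

```shell
# Re-run the in-cluster registry probe from addons_test.go:343.
# --spider: check the URL without downloading the response body.
# -S: print the server's response headers to stderr.
kubectl --context minikube run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
```

If this also times out, inspecting the Service's backends (`kubectl -n kube-system get endpoints registry`) helps distinguish a cluster-DNS failure from a registry pod that is running but not ready.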

=== RUN   TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.329814ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-jmxwm" [93925dd9-5243-4f8b-a2b4-53a5b7b50bcb] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004390165s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bxrn8" [88691c9c-8d3d-46c2-8169-7df6164800f7] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004190291s
addons_test.go:338: (dbg) Run:  kubectl --context minikube delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.095820431s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context minikube run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p minikube ip
2024/09/23 12:07:42 [DEBUG] GET http://10.128.15.239:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p minikube -n minikube
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:55 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:55 UTC |
	| start   | -o=json --download-only              | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC |                     |
	|         | -p minikube --force                  |          |         |         |                     |                     |
	|         | --alsologtostderr                    |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |          |         |         |                     |                     |
	|         | --container-runtime=docker           |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | --all                                | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:55 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:55 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:55 UTC |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:55 UTC |
	| start   | --download-only -p                   | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC |                     |
	|         | minikube --alsologtostderr           |          |         |         |                     |                     |
	|         | --binary-mirror                      |          |         |         |                     |                     |
	|         | http://127.0.0.1:43243               |          |         |         |                     |                     |
	|         | --driver=none                        |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:55 UTC |
	| start   | -p minikube --alsologtostderr        | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:56 UTC |
	|         | -v=1 --memory=2048                   |          |         |         |                     |                     |
	|         | --wait=true --driver=none            |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm               |          |         |         |                     |                     |
	| delete  | -p minikube                          | minikube | jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:56 UTC |
	| addons  | enable dashboard -p minikube         | minikube | jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	| addons  | disable dashboard -p minikube        | minikube | jenkins | v1.34.0 | 23 Sep 24 11:56 UTC |                     |
	| start   | -p minikube --wait=true              | minikube | jenkins | v1.34.0 | 23 Sep 24 11:56 UTC | 23 Sep 24 11:57 UTC |
	|         | --memory=4000 --alsologtostderr      |          |         |         |                     |                     |
	|         | --addons=registry                    |          |         |         |                     |                     |
	|         | --addons=metrics-server              |          |         |         |                     |                     |
	|         | --addons=volumesnapshots             |          |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |          |         |         |                     |                     |
	|         | --addons=gcp-auth                    |          |         |         |                     |                     |
	|         | --addons=cloud-spanner               |          |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |          |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |          |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |          |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |          |         |         |                     |                     |
	|         | --driver=none --bootstrapper=kubeadm |          |         |         |                     |                     |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 23 Sep 24 11:58 UTC | 23 Sep 24 11:58 UTC |
	|         | volcano --alsologtostderr -v=1       |          |         |         |                     |                     |
	| ip      | minikube ip                          | minikube | jenkins | v1.34.0 | 23 Sep 24 12:07 UTC | 23 Sep 24 12:07 UTC |
	| addons  | minikube addons disable              | minikube | jenkins | v1.34.0 | 23 Sep 24 12:07 UTC | 23 Sep 24 12:07 UTC |
	|         | registry --alsologtostderr           |          |         |         |                     |                     |
	|         | -v=1                                 |          |         |         |                     |                     |
	|---------|--------------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:56:10
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:56:10.600408 1781051 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:56:10.600523 1781051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:56:10.600531 1781051 out.go:358] Setting ErrFile to fd 2...
	I0923 11:56:10.600535 1781051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:56:10.600776 1781051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1770424/.minikube/bin
	I0923 11:56:10.601443 1781051 out.go:352] Setting JSON to false
	I0923 11:56:10.602434 1781051 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":149922,"bootTime":1726942649,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:56:10.602558 1781051 start.go:139] virtualization: kvm guest
	I0923 11:56:10.604806 1781051 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 11:56:10.606094 1781051 out.go:177]   - MINIKUBE_LOCATION=19690
	W0923 11:56:10.606145 1781051 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19690-1770424/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 11:56:10.606173 1781051 notify.go:220] Checking for updates...
	I0923 11:56:10.608381 1781051 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:56:10.609688 1781051 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-1770424/kubeconfig
	I0923 11:56:10.610804 1781051 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1770424/.minikube
	I0923 11:56:10.611945 1781051 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:56:10.613224 1781051 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:56:10.614628 1781051 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:56:10.625868 1781051 out.go:177] * Using the none driver based on user configuration
	I0923 11:56:10.627083 1781051 start.go:297] selected driver: none
	I0923 11:56:10.627107 1781051 start.go:901] validating driver "none" against <nil>
	I0923 11:56:10.627121 1781051 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:56:10.627159 1781051 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 11:56:10.627480 1781051 out.go:270] ! The 'none' driver does not respect the --memory flag
	I0923 11:56:10.628117 1781051 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 11:56:10.628472 1781051 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:56:10.628516 1781051 cni.go:84] Creating CNI manager for ""
	I0923 11:56:10.628595 1781051 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:56:10.628608 1781051 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 11:56:10.628683 1781051 start.go:340] cluster config:
	{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:56:10.630213 1781051 out.go:177] * Starting "minikube" primary control-plane node in "minikube" cluster
	I0923 11:56:10.631570 1781051 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/config.json ...
	I0923 11:56:10.631608 1781051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/config.json: {Name:mk13ac054d7f6503941d45916a7834ea301e025a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:56:10.631760 1781051 start.go:360] acquireMachinesLock for minikube: {Name:mkb770369218a9e8b7aecf89f12d595b5660e1ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0923 11:56:10.631790 1781051 start.go:364] duration metric: took 17.185µs to acquireMachinesLock for "minikube"
	I0923 11:56:10.631802 1781051 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 11:56:10.631895 1781051 start.go:125] createHost starting for "" (driver="none")
	I0923 11:56:10.633272 1781051 out.go:177] * Running on localhost (CPUs=8, Memory=32089MB, Disk=297540MB) ...
	I0923 11:56:10.634472 1781051 exec_runner.go:51] Run: systemctl --version
	I0923 11:56:10.636906 1781051 start.go:159] libmachine.API.Create for "minikube" (driver="none")
	I0923 11:56:10.636944 1781051 client.go:168] LocalClient.Create starting
	I0923 11:56:10.637014 1781051 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-1770424/.minikube/certs/ca.pem
	I0923 11:56:10.637043 1781051 main.go:141] libmachine: Decoding PEM data...
	I0923 11:56:10.637063 1781051 main.go:141] libmachine: Parsing certificate...
	I0923 11:56:10.637116 1781051 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19690-1770424/.minikube/certs/cert.pem
	I0923 11:56:10.637138 1781051 main.go:141] libmachine: Decoding PEM data...
	I0923 11:56:10.637156 1781051 main.go:141] libmachine: Parsing certificate...
	I0923 11:56:10.637515 1781051 client.go:171] duration metric: took 562.67µs to LocalClient.Create
	I0923 11:56:10.637537 1781051 start.go:167] duration metric: took 633.904µs to libmachine.API.Create "minikube"
	I0923 11:56:10.637543 1781051 start.go:293] postStartSetup for "minikube" (driver="none")
	I0923 11:56:10.637594 1781051 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 11:56:10.637653 1781051 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 11:56:10.648583 1781051 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 11:56:10.648605 1781051 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 11:56:10.648614 1781051 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 11:56:10.650449 1781051 out.go:177] * OS release is Ubuntu 20.04.6 LTS
	I0923 11:56:10.651555 1781051 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-1770424/.minikube/addons for local assets ...
	I0923 11:56:10.651607 1781051 filesync.go:126] Scanning /home/jenkins/minikube-integration/19690-1770424/.minikube/files for local assets ...
	I0923 11:56:10.651629 1781051 start.go:296] duration metric: took 14.077818ms for postStartSetup
	I0923 11:56:10.652286 1781051 profile.go:143] Saving config to /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/config.json ...
	I0923 11:56:10.652424 1781051 start.go:128] duration metric: took 20.519088ms to createHost
	I0923 11:56:10.652439 1781051 start.go:83] releasing machines lock for "minikube", held for 20.640822ms
	I0923 11:56:10.652843 1781051 exec_runner.go:51] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 11:56:10.652897 1781051 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
	W0923 11:56:10.654928 1781051 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0923 11:56:10.654989 1781051 exec_runner.go:51] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 11:56:10.664369 1781051 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0923 11:56:10.664395 1781051 start.go:495] detecting cgroup driver to use...
	I0923 11:56:10.664443 1781051 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:56:10.664587 1781051 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:56:10.689079 1781051 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 11:56:10.700103 1781051 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 11:56:10.711485 1781051 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 11:56:10.711559 1781051 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 11:56:10.724884 1781051 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:56:10.735382 1781051 exec_runner.go:51] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 11:56:10.746883 1781051 exec_runner.go:51] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 11:56:10.757685 1781051 exec_runner.go:51] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 11:56:10.766629 1781051 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 11:56:10.778347 1781051 exec_runner.go:51] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 11:56:10.788705 1781051 exec_runner.go:51] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 11:56:10.799282 1781051 exec_runner.go:51] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 11:56:10.808267 1781051 exec_runner.go:51] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 11:56:10.817970 1781051 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 11:56:11.046999 1781051 exec_runner.go:51] Run: sudo systemctl restart containerd
	I0923 11:56:11.116687 1781051 start.go:495] detecting cgroup driver to use...
	I0923 11:56:11.116762 1781051 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 11:56:11.116933 1781051 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 11:56:11.138983 1781051 exec_runner.go:51] Run: which cri-dockerd
	I0923 11:56:11.139978 1781051 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 11:56:11.149462 1781051 exec_runner.go:144] found /etc/systemd/system/cri-docker.service.d/10-cni.conf, removing ...
	I0923 11:56:11.149514 1781051 exec_runner.go:203] rm: /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 11:56:11.149558 1781051 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 11:56:11.159569 1781051 exec_runner.go:151] cp: memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 11:56:11.159763 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3387344310 /etc/systemd/system/cri-docker.service.d/10-cni.conf
	I0923 11:56:11.168622 1781051 exec_runner.go:51] Run: sudo systemctl unmask docker.service
	I0923 11:56:11.401034 1781051 exec_runner.go:51] Run: sudo systemctl enable docker.socket
	I0923 11:56:11.625577 1781051 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 11:56:11.625780 1781051 exec_runner.go:144] found /etc/docker/daemon.json, removing ...
	I0923 11:56:11.625798 1781051 exec_runner.go:203] rm: /etc/docker/daemon.json
	I0923 11:56:11.625857 1781051 exec_runner.go:51] Run: sudo rm -f /etc/docker/daemon.json
	I0923 11:56:11.635635 1781051 exec_runner.go:151] cp: memory --> /etc/docker/daemon.json (130 bytes)
	I0923 11:56:11.635838 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3470512156 /etc/docker/daemon.json
	I0923 11:56:11.645316 1781051 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 11:56:11.880307 1781051 exec_runner.go:51] Run: sudo systemctl restart docker
	I0923 11:56:12.200610 1781051 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 11:56:12.212167 1781051 exec_runner.go:51] Run: sudo systemctl stop cri-docker.socket
	I0923 11:56:12.231865 1781051 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 11:56:12.243300 1781051 exec_runner.go:51] Run: sudo systemctl unmask cri-docker.socket
	I0923 11:56:12.467645 1781051 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
	I0923 11:56:12.694309 1781051 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 11:56:12.925884 1781051 exec_runner.go:51] Run: sudo systemctl restart cri-docker.socket
	I0923 11:56:12.943571 1781051 exec_runner.go:51] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 11:56:12.954476 1781051 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 11:56:13.179124 1781051 exec_runner.go:51] Run: sudo systemctl restart cri-docker.service
	I0923 11:56:13.258258 1781051 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 11:56:13.258346 1781051 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
	I0923 11:56:13.260006 1781051 start.go:563] Will wait 60s for crictl version
	I0923 11:56:13.260065 1781051 exec_runner.go:51] Run: which crictl
	I0923 11:56:13.261039 1781051 exec_runner.go:51] Run: sudo /usr/local/bin/crictl version
	I0923 11:56:13.296344 1781051 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0923 11:56:13.296416 1781051 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 11:56:13.319785 1781051 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 11:56:13.346738 1781051 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0923 11:56:13.346832 1781051 exec_runner.go:51] Run: grep 127.0.0.1	host.minikube.internal$ /etc/hosts
	I0923 11:56:13.349837 1781051 out.go:177]   - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
	I0923 11:56:13.351244 1781051 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 11:56:13.351396 1781051 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 11:56:13.351413 1781051 kubeadm.go:934] updating node { 10.128.15.239 8443 v1.31.1 docker true true} ...
	I0923 11:56:13.351517 1781051 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ubuntu-20-agent-12 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.128.15.239 --resolv-conf=/run/systemd/resolve/resolv.conf
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:}
	I0923 11:56:13.351570 1781051 exec_runner.go:51] Run: docker info --format {{.CgroupDriver}}
	I0923 11:56:13.402847 1781051 cni.go:84] Creating CNI manager for ""
	I0923 11:56:13.402876 1781051 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:56:13.402895 1781051 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 11:56:13.402927 1781051 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.128.15.239 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu-20-agent-12 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.128.15.239"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.128.15.239 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 11:56:13.403093 1781051 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.128.15.239
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ubuntu-20-agent-12"
	  kubeletExtraArgs:
	    node-ip: 10.128.15.239
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.128.15.239"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
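	[editor's note] The kubeadm config dumped above is a single file holding four `---`-separated YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a minimal stdlib-only sketch (not minikube's actual parsing code), the document kinds can be sanity-checked like this:

```python
# Hypothetical sketch: list the "kind:" of each YAML document in a
# multi-document kubeadm config. The config string below is an abridged
# stand-in for the file written to /var/tmp/minikube/kubeadm.yaml.new.
config = """\
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

def kinds(multi_doc: str) -> list[str]:
    """Return the kind declared in each '---'-separated YAML document."""
    out = []
    for doc in multi_doc.split("---"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                out.append(line.split(":", 1)[1].strip())
    return out

print(kinds(config))
# -> ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration', 'KubeProxyConfiguration']
```

Note that kubeadm itself later warns (see the `kubeadm init` output below in this log) that the `kubeadm.k8s.io/v1beta3` API is deprecated.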
	I0923 11:56:13.403176 1781051 exec_runner.go:51] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 11:56:13.412806 1781051 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0923 11:56:13.412865 1781051 exec_runner.go:51] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0923 11:56:13.421656 1781051 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0923 11:56:13.421712 1781051 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:56:13.421757 1781051 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0923 11:56:13.421817 1781051 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0923 11:56:13.421836 1781051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1770424/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0923 11:56:13.421862 1781051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1770424/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0923 11:56:13.433540 1781051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1770424/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0923 11:56:13.475999 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4209527035 /var/lib/minikube/binaries/v1.31.1/kubectl
	I0923 11:56:13.497109 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1583362620 /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0923 11:56:13.530404 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2652808776 /var/lib/minikube/binaries/v1.31.1/kubelet
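	[editor's note] The `Not caching binary, using https://dl.k8s.io/...?checksum=file:...sha256` lines above show each k8s binary being fetched with a published SHA-256 digest attached. A minimal sketch of that verification pattern, assuming local stand-in bytes rather than a real download:

```python
import hashlib

# Hypothetical stand-in data: in the real flow the bytes come from
# dl.k8s.io and the expected digest from the matching .sha256 file.
binary_bytes = b"example kubelet binary contents"
expected_sha256 = hashlib.sha256(binary_bytes).hexdigest()

def verify(data: bytes, expected_hex: str) -> bool:
    """Compare the SHA-256 of downloaded data against the published digest."""
    return hashlib.sha256(data).hexdigest() == expected_hex

assert verify(binary_bytes, expected_sha256)
assert not verify(b"tampered", expected_sha256)
print("checksum ok")
```

A mismatch would indicate a corrupted or tampered download, which is why the transfer is rejected rather than retried blindly.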
	I0923 11:56:13.604275 1781051 exec_runner.go:51] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 11:56:13.613096 1781051 exec_runner.go:144] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ...
	I0923 11:56:13.613122 1781051 exec_runner.go:203] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 11:56:13.613166 1781051 exec_runner.go:51] Run: sudo rm -f /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 11:56:13.620719 1781051 exec_runner.go:151] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0923 11:56:13.620917 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube290077719 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	I0923 11:56:13.629109 1781051 exec_runner.go:144] found /lib/systemd/system/kubelet.service, removing ...
	I0923 11:56:13.629133 1781051 exec_runner.go:203] rm: /lib/systemd/system/kubelet.service
	I0923 11:56:13.629178 1781051 exec_runner.go:51] Run: sudo rm -f /lib/systemd/system/kubelet.service
	I0923 11:56:13.637734 1781051 exec_runner.go:151] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 11:56:13.637894 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3023528995 /lib/systemd/system/kubelet.service
	I0923 11:56:13.646954 1781051 exec_runner.go:151] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0923 11:56:13.647096 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1303979908 /var/tmp/minikube/kubeadm.yaml.new
	I0923 11:56:13.655408 1781051 exec_runner.go:51] Run: grep 10.128.15.239	control-plane.minikube.internal$ /etc/hosts
	I0923 11:56:13.656717 1781051 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 11:56:13.886382 1781051 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 11:56:13.901954 1781051 certs.go:68] Setting up /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube for IP: 10.128.15.239
	I0923 11:56:13.901982 1781051 certs.go:194] generating shared ca certs ...
	I0923 11:56:13.902008 1781051 certs.go:226] acquiring lock for ca certs: {Name:mk97e24c5ddee384459fe85d80804e5d50948023 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:56:13.902186 1781051 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19690-1770424/.minikube/ca.key
	I0923 11:56:13.902249 1781051 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19690-1770424/.minikube/proxy-client-ca.key
	I0923 11:56:13.902264 1781051 certs.go:256] generating profile certs ...
	I0923 11:56:13.902340 1781051 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/client.key
	I0923 11:56:13.902360 1781051 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/client.crt with IP's: []
	I0923 11:56:13.947334 1781051 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/client.crt ...
	I0923 11:56:13.947364 1781051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/client.crt: {Name:mk5bc1d2cfaf59e05f2d6a3133920d4d372921d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:56:13.947499 1781051 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/client.key ...
	I0923 11:56:13.947510 1781051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/client.key: {Name:mkd678359ec057fc40140d58a292a3894c5c1e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:56:13.947576 1781051 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/apiserver.key.ed77be83
	I0923 11:56:13.947591 1781051 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/apiserver.crt.ed77be83 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.128.15.239]
	I0923 11:56:14.013072 1781051 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/apiserver.crt.ed77be83 ...
	I0923 11:56:14.013110 1781051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/apiserver.crt.ed77be83: {Name:mkc034bf1d3a043f5660f1fdae0faaaffb90e8f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:56:14.013250 1781051 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/apiserver.key.ed77be83 ...
	I0923 11:56:14.013262 1781051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/apiserver.key.ed77be83: {Name:mkea40c1c73c8892335b232e8c3fa4fab21a552c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:56:14.013318 1781051 certs.go:381] copying /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/apiserver.crt.ed77be83 -> /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/apiserver.crt
	I0923 11:56:14.013425 1781051 certs.go:385] copying /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/apiserver.key.ed77be83 -> /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/apiserver.key
	I0923 11:56:14.013486 1781051 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/proxy-client.key
	I0923 11:56:14.013502 1781051 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/proxy-client.crt with IP's: []
	I0923 11:56:14.290306 1781051 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/proxy-client.crt ...
	I0923 11:56:14.290342 1781051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/proxy-client.crt: {Name:mk55bb0ac0bde76cc138cf47dffd9c807a719164 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:56:14.290488 1781051 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/proxy-client.key ...
	I0923 11:56:14.290499 1781051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/proxy-client.key: {Name:mk49c113cd9b21311b902405644efe6e81c684c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:56:14.290659 1781051 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1770424/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 11:56:14.290694 1781051 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1770424/.minikube/certs/ca.pem (1078 bytes)
	I0923 11:56:14.290724 1781051 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1770424/.minikube/certs/cert.pem (1123 bytes)
	I0923 11:56:14.290746 1781051 certs.go:484] found cert: /home/jenkins/minikube-integration/19690-1770424/.minikube/certs/key.pem (1675 bytes)
	I0923 11:56:14.291410 1781051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1770424/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 11:56:14.291530 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2383311950 /var/lib/minikube/certs/ca.crt
	I0923 11:56:14.300596 1781051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1770424/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 11:56:14.300797 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3323146708 /var/lib/minikube/certs/ca.key
	I0923 11:56:14.309879 1781051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1770424/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 11:56:14.310051 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube764936559 /var/lib/minikube/certs/proxy-client-ca.crt
	I0923 11:56:14.319906 1781051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1770424/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 11:56:14.320046 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1074678528 /var/lib/minikube/certs/proxy-client-ca.key
	I0923 11:56:14.328648 1781051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
	I0923 11:56:14.328805 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube703203534 /var/lib/minikube/certs/apiserver.crt
	I0923 11:56:14.337825 1781051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 11:56:14.337979 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4229467897 /var/lib/minikube/certs/apiserver.key
	I0923 11:56:14.347814 1781051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 11:56:14.347949 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2251826424 /var/lib/minikube/certs/proxy-client.crt
	I0923 11:56:14.358084 1781051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1770424/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0923 11:56:14.358309 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube983820934 /var/lib/minikube/certs/proxy-client.key
	I0923 11:56:14.367598 1781051 exec_runner.go:144] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
	I0923 11:56:14.367624 1781051 exec_runner.go:203] rm: /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:56:14.367678 1781051 exec_runner.go:51] Run: sudo rm -f /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:56:14.376303 1781051 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19690-1770424/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 11:56:14.376479 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3037108097 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:56:14.385577 1781051 exec_runner.go:151] cp: memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 11:56:14.385749 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3172281193 /var/lib/minikube/kubeconfig
	I0923 11:56:14.394640 1781051 exec_runner.go:51] Run: openssl version
	I0923 11:56:14.397789 1781051 exec_runner.go:51] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 11:56:14.407077 1781051 exec_runner.go:51] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:56:14.408420 1781051 certs.go:528] hashing: -rw-r--r-- 1 jenkins jenkins 1111 Sep 23 11:56 /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:56:14.408464 1781051 exec_runner.go:51] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 11:56:14.411476 1781051 exec_runner.go:51] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
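	[editor's note] The `ln -fs ... /etc/ssl/certs/b5213941.0` above installs the CA into OpenSSL's hashed trust directory: the link name is the cert's subject-name hash (as printed by the preceding `openssl x509 -hash -noout` run) plus a `.0` suffix. A stdlib-only sketch of that symlink layout, assuming a placeholder PEM body and reusing the hash value from the log rather than recomputing it:

```python
import os
import tempfile

def link_ca(certs_dir: str, pem_name: str, hash_name: str) -> str:
    """Install a cert under certs_dir and create the <hash>.0 symlink
    OpenSSL expects; return the link's resolved target."""
    pem = os.path.join(certs_dir, pem_name)
    with open(pem, "w") as f:
        # Placeholder body, not a real certificate.
        f.write("-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n")
    link = os.path.join(certs_dir, hash_name + ".0")
    os.symlink(pem, link)  # the equivalent of the `ln -fs` the log runs
    return os.path.realpath(link)

with tempfile.TemporaryDirectory() as d:
    # "b5213941" is the hash shown in the log, used here only as a file name.
    target = link_ca(d, "minikubeCA.pem", "b5213941")
    assert target.endswith("minikubeCA.pem")
```

This hashed-link scheme is what lets OpenSSL locate a trust anchor by subject hash without scanning every file in the directory.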
	I0923 11:56:14.419588 1781051 exec_runner.go:51] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 11:56:14.421045 1781051 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: exit status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 11:56:14.421087 1781051 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 11:56:14.421200 1781051 exec_runner.go:51] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 11:56:14.437387 1781051 exec_runner.go:51] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 11:56:14.447154 1781051 exec_runner.go:51] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 11:56:14.455485 1781051 exec_runner.go:51] Run: docker version --format {{.Server.Version}}
	I0923 11:56:14.478071 1781051 exec_runner.go:51] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 11:56:14.487709 1781051 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 11:56:14.487737 1781051 kubeadm.go:157] found existing configuration files:
	
	I0923 11:56:14.487784 1781051 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 11:56:14.497815 1781051 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 11:56:14.497888 1781051 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 11:56:14.505682 1781051 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 11:56:14.515145 1781051 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 11:56:14.515224 1781051 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 11:56:14.524696 1781051 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 11:56:14.532925 1781051 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 11:56:14.532994 1781051 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 11:56:14.540855 1781051 exec_runner.go:51] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 11:56:14.549419 1781051 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 11:56:14.549482 1781051 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 11:56:14.558816 1781051 exec_runner.go:97] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0923 11:56:14.594480 1781051 kubeadm.go:310] W0923 11:56:14.594340 1781925 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:56:14.594957 1781051 kubeadm.go:310] W0923 11:56:14.594915 1781925 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 11:56:14.596663 1781051 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 11:56:14.596708 1781051 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 11:56:14.703208 1781051 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 11:56:14.703321 1781051 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 11:56:14.703333 1781051 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 11:56:14.703338 1781051 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 11:56:14.715545 1781051 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 11:56:14.718620 1781051 out.go:235]   - Generating certificates and keys ...
	I0923 11:56:14.718667 1781051 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 11:56:14.718678 1781051 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 11:56:14.894251 1781051 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 11:56:14.961993 1781051 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 11:56:15.173738 1781051 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 11:56:15.234595 1781051 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 11:56:15.322294 1781051 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 11:56:15.322360 1781051 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost ubuntu-20-agent-12] and IPs [10.128.15.239 127.0.0.1 ::1]
	I0923 11:56:15.580183 1781051 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 11:56:15.580285 1781051 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu-20-agent-12] and IPs [10.128.15.239 127.0.0.1 ::1]
	I0923 11:56:15.746315 1781051 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 11:56:16.014051 1781051 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 11:56:16.064189 1781051 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 11:56:16.064330 1781051 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 11:56:16.305350 1781051 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 11:56:16.436309 1781051 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 11:56:16.523092 1781051 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 11:56:16.611876 1781051 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 11:56:16.803182 1781051 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 11:56:16.803680 1781051 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 11:56:16.806057 1781051 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 11:56:16.807874 1781051 out.go:235]   - Booting up control plane ...
	I0923 11:56:16.807905 1781051 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 11:56:16.807925 1781051 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 11:56:16.808501 1781051 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 11:56:16.827186 1781051 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 11:56:16.831570 1781051 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 11:56:16.831631 1781051 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 11:56:17.076563 1781051 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 11:56:17.076590 1781051 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 11:56:18.078612 1781051 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001983978s
	I0923 11:56:18.078731 1781051 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 11:56:22.579988 1781051 kubeadm.go:310] [api-check] The API server is healthy after 4.501388432s
	I0923 11:56:22.592350 1781051 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 11:56:22.604819 1781051 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 11:56:22.624624 1781051 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 11:56:22.624653 1781051 kubeadm.go:310] [mark-control-plane] Marking the node ubuntu-20-agent-12 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 11:56:22.633464 1781051 kubeadm.go:310] [bootstrap-token] Using token: 5lwmbp.5z8s1wwdz3e34z5z
	I0923 11:56:22.634856 1781051 out.go:235]   - Configuring RBAC rules ...
	I0923 11:56:22.634895 1781051 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 11:56:22.639235 1781051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 11:56:22.646586 1781051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 11:56:22.649719 1781051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 11:56:22.653151 1781051 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 11:56:22.656225 1781051 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 11:56:22.985802 1781051 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 11:56:23.406071 1781051 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 11:56:23.985610 1781051 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 11:56:23.986430 1781051 kubeadm.go:310] 
	I0923 11:56:23.986441 1781051 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 11:56:23.986444 1781051 kubeadm.go:310] 
	I0923 11:56:23.986459 1781051 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 11:56:23.986463 1781051 kubeadm.go:310] 
	I0923 11:56:23.986468 1781051 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 11:56:23.986482 1781051 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 11:56:23.986485 1781051 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 11:56:23.986489 1781051 kubeadm.go:310] 
	I0923 11:56:23.986492 1781051 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 11:56:23.986495 1781051 kubeadm.go:310] 
	I0923 11:56:23.986531 1781051 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 11:56:23.986535 1781051 kubeadm.go:310] 
	I0923 11:56:23.986539 1781051 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 11:56:23.986543 1781051 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 11:56:23.986547 1781051 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 11:56:23.986551 1781051 kubeadm.go:310] 
	I0923 11:56:23.986555 1781051 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 11:56:23.986560 1781051 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 11:56:23.986564 1781051 kubeadm.go:310] 
	I0923 11:56:23.986567 1781051 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5lwmbp.5z8s1wwdz3e34z5z \
	I0923 11:56:23.986580 1781051 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52578282516a876214cc87b5e35255cefff0a7abe4470144fe112ac26ec989b7 \
	I0923 11:56:23.986584 1781051 kubeadm.go:310] 	--control-plane 
	I0923 11:56:23.986588 1781051 kubeadm.go:310] 
	I0923 11:56:23.986592 1781051 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 11:56:23.986595 1781051 kubeadm.go:310] 
	I0923 11:56:23.986599 1781051 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5lwmbp.5z8s1wwdz3e34z5z \
	I0923 11:56:23.986603 1781051 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:52578282516a876214cc87b5e35255cefff0a7abe4470144fe112ac26ec989b7 
	I0923 11:56:23.990596 1781051 cni.go:84] Creating CNI manager for ""
	I0923 11:56:23.990626 1781051 cni.go:158] "none" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 11:56:23.992638 1781051 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 11:56:23.993895 1781051 exec_runner.go:51] Run: sudo mkdir -p /etc/cni/net.d
	I0923 11:56:24.005873 1781051 exec_runner.go:151] cp: memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0923 11:56:24.006046 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2293539169 /etc/cni/net.d/1-k8s.conflist
	I0923 11:56:24.016999 1781051 exec_runner.go:51] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 11:56:24.017054 1781051 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:56:24.017149 1781051 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ubuntu-20-agent-12 minikube.k8s.io/updated_at=2024_09_23T11_56_24_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
	I0923 11:56:24.027434 1781051 ops.go:34] apiserver oom_adj: -16
	I0923 11:56:24.091685 1781051 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:56:24.592489 1781051 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:56:25.092188 1781051 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:56:25.592384 1781051 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:56:26.092544 1781051 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:56:26.591916 1781051 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:56:27.092436 1781051 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:56:27.594684 1781051 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:56:28.092096 1781051 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:56:28.591925 1781051 exec_runner.go:51] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 11:56:28.679382 1781051 kubeadm.go:1113] duration metric: took 4.662395085s to wait for elevateKubeSystemPrivileges
	I0923 11:56:28.679436 1781051 kubeadm.go:394] duration metric: took 14.258352106s to StartCluster
	I0923 11:56:28.679463 1781051 settings.go:142] acquiring lock: {Name:mk47875e0ee2062688dc3277037a9505c687d682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:56:28.679543 1781051 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19690-1770424/kubeconfig
	I0923 11:56:28.680498 1781051 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19690-1770424/kubeconfig: {Name:mk2ac10a07464df1a71c2b2a09e37115b9d155ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 11:56:28.680788 1781051 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 11:56:28.680851 1781051 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 11:56:28.681027 1781051 addons.go:69] Setting yakd=true in profile "minikube"
	I0923 11:56:28.681040 1781051 addons.go:69] Setting nvidia-device-plugin=true in profile "minikube"
	I0923 11:56:28.681051 1781051 addons.go:234] Setting addon yakd=true in "minikube"
	I0923 11:56:28.681073 1781051 addons.go:234] Setting addon nvidia-device-plugin=true in "minikube"
	I0923 11:56:28.681085 1781051 addons.go:69] Setting storage-provisioner-rancher=true in profile "minikube"
	I0923 11:56:28.681080 1781051 addons.go:69] Setting gcp-auth=true in profile "minikube"
	I0923 11:56:28.681038 1781051 addons.go:69] Setting cloud-spanner=true in profile "minikube"
	I0923 11:56:28.681124 1781051 addons.go:69] Setting volcano=true in profile "minikube"
	I0923 11:56:28.681134 1781051 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:56:28.681168 1781051 addons.go:234] Setting addon cloud-spanner=true in "minikube"
	I0923 11:56:28.681183 1781051 addons.go:69] Setting inspektor-gadget=true in profile "minikube"
	I0923 11:56:28.681194 1781051 addons.go:234] Setting addon inspektor-gadget=true in "minikube"
	I0923 11:56:28.681195 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.681213 1781051 addons.go:69] Setting volumesnapshots=true in profile "minikube"
	I0923 11:56:28.681233 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.681237 1781051 addons.go:234] Setting addon volumesnapshots=true in "minikube"
	I0923 11:56:28.681276 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.681486 1781051 addons.go:234] Setting addon volcano=true in "minikube"
	I0923 11:56:28.681523 1781051 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "minikube"
	I0923 11:56:28.681551 1781051 addons.go:69] Setting csi-hostpath-driver=true in profile "minikube"
	I0923 11:56:28.681566 1781051 mustload.go:65] Loading cluster: minikube
	I0923 11:56:28.681616 1781051 addons.go:234] Setting addon csi-hostpath-driver=true in "minikube"
	I0923 11:56:28.681636 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.681837 1781051 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 11:56:28.682163 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.682218 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.681116 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.682725 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.682867 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.682875 1781051 addons.go:69] Setting storage-provisioner=true in profile "minikube"
	I0923 11:56:28.682891 1781051 addons.go:234] Setting addon storage-provisioner=true in "minikube"
	I0923 11:56:28.682900 1781051 out.go:177] * Configuring local host environment ...
	I0923 11:56:28.682961 1781051 addons.go:69] Setting metrics-server=true in profile "minikube"
	I0923 11:56:28.682749 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.682977 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.682981 1781051 addons.go:234] Setting addon metrics-server=true in "minikube"
	I0923 11:56:28.683013 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.683019 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.681539 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.683250 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.682919 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.683879 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.683895 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.683938 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.684311 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.684327 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.684361 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.684549 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.684571 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.684605 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.684787 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.684814 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.684864 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.682966 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0923 11:56:28.685771 1781051 out.go:270] * 
	W0923 11:56:28.685857 1781051 out.go:270] ! The 'none' driver is designed for experts who need to integrate with an existing VM
	W0923 11:56:28.686026 1781051 out.go:270] * Most users should use the newer 'docker' driver instead, which does not require root!
	W0923 11:56:28.686045 1781051 out.go:270] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
	W0923 11:56:28.686051 1781051 out.go:270] * 
	W0923 11:56:28.686118 1781051 out.go:270] ! kubectl and minikube configuration will be stored in /home/jenkins
	W0923 11:56:28.686127 1781051 out.go:270] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
	W0923 11:56:28.686147 1781051 out.go:270] * 
	W0923 11:56:28.686175 1781051 out.go:270]   - sudo mv /home/jenkins/.kube /home/jenkins/.minikube $HOME
	W0923 11:56:28.686188 1781051 out.go:270]   - sudo chown -R $USER $HOME/.kube $HOME/.minikube
	W0923 11:56:28.686194 1781051 out.go:270] * 
	W0923 11:56:28.686201 1781051 out.go:270] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
	I0923 11:56:28.686233 1781051 start.go:235] Will wait 6m0s for node &{Name: IP:10.128.15.239 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 11:56:28.686783 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.682828 1781051 addons.go:69] Setting default-storageclass=true in profile "minikube"
	I0923 11:56:28.686820 1781051 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
	I0923 11:56:28.687025 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.687089 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.681120 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.682839 1781051 addons.go:69] Setting registry=true in profile "minikube"
	I0923 11:56:28.687446 1781051 addons.go:234] Setting addon registry=true in "minikube"
	I0923 11:56:28.687487 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.688539 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.688562 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.688604 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.688711 1781051 out.go:177] * Verifying Kubernetes components...
	I0923 11:56:28.692801 1781051 exec_runner.go:51] Run: sudo systemctl daemon-reload
	I0923 11:56:28.710563 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.718035 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.718065 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.718085 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.718106 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.718034 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.718114 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.718128 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.718147 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.718106 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.737225 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.737259 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.737309 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.737852 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.737947 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.738475 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.740346 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.740615 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.741334 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.741402 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.742741 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.742742 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.746145 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.748996 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.755973 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.756055 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.757127 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.758794 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.758924 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.764911 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.765010 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.770383 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.775707 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.775782 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.778387 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.778525 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.780602 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.781904 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.781939 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.788098 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.788177 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.788192 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.788260 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.790558 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.791647 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.791731 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.793230 1781051 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 11:56:28.795417 1781051 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 11:56:28.797003 1781051 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 11:56:28.798931 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.800761 1781051 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 11:56:28.801980 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.802056 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.802412 1781051 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 11:56:28.802473 1781051 exec_runner.go:151] cp: metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 11:56:28.803009 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2960444605 /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 11:56:28.804464 1781051 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 11:56:28.806755 1781051 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 11:56:28.806753 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.806921 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.816010 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.816041 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.816935 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.818448 1781051 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 11:56:28.821261 1781051 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 11:56:28.824894 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.826315 1781051 addons.go:234] Setting addon storage-provisioner-rancher=true in "minikube"
	I0923 11:56:28.826368 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.827017 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.827042 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.827083 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.827347 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.827372 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.828208 1781051 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 11:56:28.830484 1781051 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 11:56:28.830529 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 11:56:28.830680 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2102130802 /etc/kubernetes/addons/registry-rc.yaml
	I0923 11:56:28.830758 1781051 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 11:56:28.831824 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.831849 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.832957 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.833024 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.833627 1781051 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 11:56:28.834603 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.835294 1781051 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 11:56:28.835332 1781051 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 11:56:28.835560 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3759299334 /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 11:56:28.836476 1781051 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 11:56:28.836653 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.836709 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.837981 1781051 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 11:56:28.838023 1781051 exec_runner.go:151] cp: yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 11:56:28.838181 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2779832049 /etc/kubernetes/addons/yakd-ns.yaml
	I0923 11:56:28.839226 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.842300 1781051 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 11:56:28.844336 1781051 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:56:28.844366 1781051 exec_runner.go:144] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
	I0923 11:56:28.844374 1781051 exec_runner.go:203] rm: /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:56:28.844433 1781051 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:56:28.844973 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.845039 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.847953 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.847984 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.851906 1781051 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 11:56:28.851941 1781051 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 11:56:28.852085 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube301789806 /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 11:56:28.853282 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.853316 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.855291 1781051 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 11:56:28.855324 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 11:56:28.855455 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2188087717 /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 11:56:28.858471 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.858535 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.860875 1781051 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 11:56:28.860906 1781051 exec_runner.go:151] cp: yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 11:56:28.861025 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3104840562 /etc/kubernetes/addons/yakd-sa.yaml
	I0923 11:56:28.861798 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.861822 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.865769 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.865918 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.870393 1781051 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 11:56:28.870443 1781051 exec_runner.go:151] cp: registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 11:56:28.870607 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2438954397 /etc/kubernetes/addons/registry-svc.yaml
	I0923 11:56:28.873250 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.873303 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.875245 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.875283 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.875750 1781051 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 11:56:28.875805 1781051 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 11:56:28.876063 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube475501217 /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 11:56:28.876421 1781051 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 11:56:28.876414 1781051 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 11:56:28.879181 1781051 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:56:28.879215 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 11:56:28.879346 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1121111134 /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:56:28.879570 1781051 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 11:56:28.879592 1781051 exec_runner.go:151] cp: yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 11:56:28.879702 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3337982138 /etc/kubernetes/addons/yakd-crb.yaml
	I0923 11:56:28.879834 1781051 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 11:56:28.880013 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.882629 1781051 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 11:56:28.882681 1781051 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 11:56:28.884311 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.884339 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.884715 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.884857 1781051 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 11:56:28.884882 1781051 exec_runner.go:151] cp: inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 11:56:28.885043 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1510665932 /etc/kubernetes/addons/ig-namespace.yaml
	I0923 11:56:28.885266 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.885524 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.885542 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.888275 1781051 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 11:56:28.888312 1781051 exec_runner.go:151] cp: metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 11:56:28.888546 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube879883501 /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 11:56:28.889328 1781051 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:56:28.889367 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 11:56:28.889922 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3656609227 /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:56:28.890967 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.891469 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.892879 1781051 addons.go:234] Setting addon default-storageclass=true in "minikube"
	I0923 11:56:28.892928 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:28.893160 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.897075 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.893739 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:28.897371 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:28.897430 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:28.895423 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 11:56:28.897849 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3345122890 /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:56:28.898022 1781051 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 11:56:28.898583 1781051 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 11:56:28.900018 1781051 exec_runner.go:51] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           127.0.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 11:56:28.901148 1781051 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 11:56:28.901181 1781051 exec_runner.go:151] cp: volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 11:56:28.901319 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube737386867 /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 11:56:28.902519 1781051 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:56:28.902552 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 11:56:28.902677 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube856590260 /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:56:28.907389 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 11:56:28.908509 1781051 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 11:56:28.908539 1781051 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 11:56:28.908545 1781051 exec_runner.go:151] cp: inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 11:56:28.908568 1781051 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 11:56:28.908704 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1512164413 /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 11:56:28.908953 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2404200444 /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 11:56:28.910615 1781051 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 11:56:28.910652 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 11:56:28.910807 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1731946404 /etc/kubernetes/addons/deployment.yaml
	I0923 11:56:28.916505 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 11:56:28.916685 1781051 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 11:56:28.916712 1781051 exec_runner.go:151] cp: yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 11:56:28.916889 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1484558383 /etc/kubernetes/addons/yakd-svc.yaml
	I0923 11:56:28.923765 1781051 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:56:28.923923 1781051 exec_runner.go:151] cp: metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 11:56:28.924101 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3459553790 /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:56:28.927556 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.927652 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.928350 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:28.929427 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 11:56:28.933570 1781051 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 11:56:28.933603 1781051 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 11:56:28.933724 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2769275225 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 11:56:28.933892 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 11:56:28.937817 1781051 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 11:56:28.937858 1781051 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 11:56:28.938158 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2387216698 /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 11:56:28.945068 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 11:56:28.945207 1781051 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:56:28.945231 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 11:56:28.945384 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube523329633 /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:56:28.947185 1781051 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 11:56:28.947219 1781051 exec_runner.go:151] cp: inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 11:56:28.947392 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1862177572 /etc/kubernetes/addons/ig-role.yaml
	I0923 11:56:28.949806 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:28.949838 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:28.951314 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:28.951381 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:28.954928 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:28.957486 1781051 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 11:56:28.958989 1781051 out.go:177]   - Using image docker.io/busybox:stable
	I0923 11:56:28.960374 1781051 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:56:28.960412 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 11:56:28.960831 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3014098762 /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:56:28.962277 1781051 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 11:56:28.962327 1781051 exec_runner.go:151] cp: csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 11:56:28.962472 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3222059662 /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 11:56:28.967793 1781051 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 11:56:28.967832 1781051 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 11:56:28.967990 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2108059712 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 11:56:28.971047 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 11:56:28.976045 1781051 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 11:56:28.976108 1781051 exec_runner.go:151] cp: inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 11:56:28.976368 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2498991577 /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 11:56:28.984028 1781051 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 11:56:28.984065 1781051 exec_runner.go:151] cp: volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 11:56:28.984222 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube582029182 /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 11:56:28.999642 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 11:56:29.004641 1781051 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 11:56:29.004688 1781051 exec_runner.go:151] cp: volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 11:56:29.004951 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1099605000 /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 11:56:29.016352 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:29.016393 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:29.017737 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 11:56:29.018627 1781051 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 11:56:29.018674 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 11:56:29.018813 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1920556255 /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 11:56:29.021726 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:29.021779 1781051 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 11:56:29.021797 1781051 exec_runner.go:144] found /etc/kubernetes/addons/storageclass.yaml, removing ...
	I0923 11:56:29.021804 1781051 exec_runner.go:203] rm: /etc/kubernetes/addons/storageclass.yaml
	I0923 11:56:29.021855 1781051 exec_runner.go:51] Run: sudo rm -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:56:29.045740 1781051 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 11:56:29.045781 1781051 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 11:56:29.045928 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3695983410 /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 11:56:29.070722 1781051 exec_runner.go:151] cp: storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 11:56:29.073183 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube740277350 /etc/kubernetes/addons/storageclass.yaml
	I0923 11:56:29.086843 1781051 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:56:29.086898 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 11:56:29.087100 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4127858415 /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:56:29.102678 1781051 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 11:56:29.102733 1781051 exec_runner.go:151] cp: inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 11:56:29.102902 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube988395450 /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 11:56:29.113083 1781051 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 11:56:29.113140 1781051 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 11:56:29.113297 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4289975789 /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 11:56:29.146425 1781051 exec_runner.go:51] Run: sudo systemctl start kubelet
	I0923 11:56:29.164511 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:56:29.166972 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 11:56:29.173857 1781051 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 11:56:29.173903 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 11:56:29.174758 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2739861927 /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 11:56:29.179310 1781051 node_ready.go:35] waiting up to 6m0s for node "ubuntu-20-agent-12" to be "Ready" ...
	I0923 11:56:29.183852 1781051 node_ready.go:49] node "ubuntu-20-agent-12" has status "Ready":"True"
	I0923 11:56:29.183886 1781051 node_ready.go:38] duration metric: took 4.542014ms for node "ubuntu-20-agent-12" to be "Ready" ...
	I0923 11:56:29.183903 1781051 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:56:29.195169 1781051 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-858gb" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:29.233662 1781051 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 11:56:29.233707 1781051 exec_runner.go:151] cp: inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 11:56:29.233841 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1893707968 /etc/kubernetes/addons/ig-crd.yaml
	I0923 11:56:29.235044 1781051 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 11:56:29.235078 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 11:56:29.235233 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4152729914 /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 11:56:29.295201 1781051 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:56:29.295243 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 11:56:29.295393 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2663296266 /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:56:29.316130 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 11:56:29.318291 1781051 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:56:29.318336 1781051 exec_runner.go:151] cp: csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 11:56:29.318518 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube939407820 /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:56:29.346908 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 11:56:29.450622 1781051 start.go:971] {"host.minikube.internal": 127.0.0.1} host record injected into CoreDNS's ConfigMap
	I0923 11:56:29.964927 1781051 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
	I0923 11:56:29.994166 1781051 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.049033355s)
	I0923 11:56:30.023367 1781051 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.089412665s)
	I0923 11:56:30.023412 1781051 addons.go:475] Verifying addon registry=true in "minikube"
	I0923 11:56:30.026872 1781051 out.go:177] * Verifying registry addon...
	I0923 11:56:30.030823 1781051 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 11:56:30.039406 1781051 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 11:56:30.039446 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:30.057492 1781051 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.086393014s)
	I0923 11:56:30.057529 1781051 addons.go:475] Verifying addon metrics-server=true in "minikube"
	I0923 11:56:30.383148 1781051 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.365347344s)
	I0923 11:56:30.536353 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:30.637454 1781051 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.637725843s)
	I0923 11:56:30.638700 1781051 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube service yakd-dashboard -n yakd-dashboard
	
	I0923 11:56:30.754904 1781051 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.438659575s)
	I0923 11:56:31.057962 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:31.136262 1781051 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.971680563s)
	W0923 11:56:31.136313 1781051 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 11:56:31.136352 1781051 retry.go:31] will retry after 237.361229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: exit status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 11:56:31.203217 1781051 pod_ready.go:103] pod "coredns-7c65d6cfc9-858gb" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:31.374326 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 11:56:31.538121 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:31.712889 1781051 pod_ready.go:93] pod "coredns-7c65d6cfc9-858gb" in "kube-system" namespace has status "Ready":"True"
	I0923 11:56:31.712929 1781051 pod_ready.go:82] duration metric: took 2.517663168s for pod "coredns-7c65d6cfc9-858gb" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:31.712944 1781051 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rwn2t" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:32.035422 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:32.057386 1781051 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.710349903s)
	I0923 11:56:32.057586 1781051 addons.go:475] Verifying addon csi-hostpath-driver=true in "minikube"
	I0923 11:56:32.061318 1781051 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 11:56:32.064483 1781051 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 11:56:32.092164 1781051 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 11:56:32.092190 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:32.230161 1781051 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.313606145s)
	I0923 11:56:32.538041 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:32.570475 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:32.594217 1781051 exec_runner.go:84] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.219818471s)
	I0923 11:56:33.035484 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:33.070068 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:33.536402 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:33.570002 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:33.719880 1781051 pod_ready.go:103] pod "coredns-7c65d6cfc9-rwn2t" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:34.036429 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:34.138418 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:34.535864 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:34.570237 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:35.034646 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:35.070047 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:35.535746 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:35.569776 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:35.720224 1781051 pod_ready.go:103] pod "coredns-7c65d6cfc9-rwn2t" in "kube-system" namespace has status "Ready":"False"
	I0923 11:56:35.862274 1781051 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 11:56:35.862477 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube1090494032 /var/lib/minikube/google_application_credentials.json
	I0923 11:56:35.875499 1781051 exec_runner.go:151] cp: memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 11:56:35.875676 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube3666276499 /var/lib/minikube/google_cloud_project
	I0923 11:56:35.887979 1781051 addons.go:234] Setting addon gcp-auth=true in "minikube"
	I0923 11:56:35.888055 1781051 host.go:66] Checking if "minikube" exists ...
	I0923 11:56:35.888889 1781051 kubeconfig.go:125] found "minikube" server: "https://10.128.15.239:8443"
	I0923 11:56:35.888915 1781051 api_server.go:166] Checking apiserver status ...
	I0923 11:56:35.888959 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:35.911150 1781051 exec_runner.go:51] Run: sudo egrep ^[0-9]+:freezer: /proc/1782358/cgroup
	I0923 11:56:35.923108 1781051 api_server.go:182] apiserver freezer: "5:freezer:/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96"
	I0923 11:56:35.923187 1781051 exec_runner.go:51] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod14563d66a9462601c5ca6bf94851d9f2/6caa2c70aae5db4f1587e31d1a4bb6d571ac98105714ab4f6138f80772e5cb96/freezer.state
	I0923 11:56:35.933820 1781051 api_server.go:204] freezer state: "THAWED"
	I0923 11:56:35.933863 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:35.939353 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:35.939429 1781051 exec_runner.go:51] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 11:56:35.942787 1781051 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 11:56:35.944430 1781051 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 11:56:35.945893 1781051 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 11:56:35.945948 1781051 exec_runner.go:151] cp: gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 11:56:35.946117 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2000734967 /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 11:56:35.959109 1781051 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 11:56:35.959247 1781051 exec_runner.go:151] cp: gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 11:56:35.959427 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube2460837523 /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 11:56:35.971586 1781051 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:56:35.971620 1781051 exec_runner.go:151] cp: memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 11:56:35.971784 1781051 exec_runner.go:51] Run: sudo cp -a /tmp/minikube4100877370 /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:56:35.984046 1781051 exec_runner.go:51] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 11:56:36.035925 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:36.070354 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:36.572864 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:36.573415 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:36.836415 1781051 addons.go:475] Verifying addon gcp-auth=true in "minikube"
	I0923 11:56:36.869652 1781051 out.go:177] * Verifying gcp-auth addon...
	I0923 11:56:36.889007 1781051 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 11:56:36.891664 1781051 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 11:56:37.036094 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:37.071268 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:37.535327 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:37.569395 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:37.719068 1781051 pod_ready.go:98] pod "coredns-7c65d6cfc9-rwn2t" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:56:37 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:56:28 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:56:28 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:56:28 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:56:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.128.15.239 HostIPs:[{IP:10.128.15.239}] PodIP:10.244.0.4 PodIPs:[{IP:10.244.0.4}] StartTime:2024-09-23 11:56:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 11:56:30 +0000 UTC,FinishedAt:2024-09-23 11:56:36 +0000 UTC,ContainerID:docker://c7cfc0227d384f2b4c713a052a4c1a02536a0920786efb275d4e96e0e81ddacc,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://c7cfc0227d384f2b4c713a052a4c1a02536a0920786efb275d4e96e0e81ddacc Started:0xc0020ebcd0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00225b580} {Name:kube-api-access-bbgc5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00225b590}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 11:56:37.719104 1781051 pod_ready.go:82] duration metric: took 6.006150082s for pod "coredns-7c65d6cfc9-rwn2t" in "kube-system" namespace to be "Ready" ...
	E0923 11:56:37.719119 1781051 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-rwn2t" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:56:37 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:56:28 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:56:28 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:56:28 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 11:56:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.128.15.239 HostIPs:[{IP:10.128.15.239}] PodIP:10.244.0.4 PodIPs:[{IP:10.244.0.4}] StartTime:2024-09-23 11:56:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 11:56:30 +0000 UTC,FinishedAt:2024-09-23 11:56:36 +0000 UTC,ContainerID:docker://c7cfc0227d384f2b4c713a052a4c1a02536a0920786efb275d4e96e0e81ddacc,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://c7cfc0227d384f2b4c713a052a4c1a02536a0920786efb275d4e96e0e81ddacc Started:0xc0020ebcd0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00225b580} {Name:kube-api-access-bbgc5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00225b590}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
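The wait above ends early rather than timing out: the old coredns replica reached phase `Succeeded` after the Deployment scaled down, and a pod in a terminal phase can never become Ready, so pod_ready skips it and moves on. A sketch of that decision (the names are illustrative, not minikube's actual types):

```go
package main

import "fmt"

// waitDecision says what a readiness waiter should do given a pod's phase.
// Terminal phases (Succeeded, Failed) can never transition to Ready, so
// continuing to wait on them would only burn the timeout.
func waitDecision(phase string, ready bool) string {
	switch {
	case ready:
		return "done"
	case phase == "Succeeded" || phase == "Failed":
		return "skip" // terminal phase: stop waiting for this pod
	default:
		return "keep-waiting"
	}
}

func main() {
	fmt.Println(waitDecision("Succeeded", false)) // the coredns-rwn2t case above
	fmt.Println(waitDecision("Running", false))
	fmt.Println(waitDecision("Running", true))
}
```

This is why the log records the skip at error level but the overall wait still completes and proceeds to etcd and the other control-plane pods.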
	I0923 11:56:37.719134 1781051 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:37.723678 1781051 pod_ready.go:93] pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 11:56:37.723700 1781051 pod_ready.go:82] duration metric: took 4.557597ms for pod "etcd-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:37.723710 1781051 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:37.728706 1781051 pod_ready.go:93] pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 11:56:37.728748 1781051 pod_ready.go:82] duration metric: took 5.029044ms for pod "kube-apiserver-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:37.728762 1781051 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:37.733354 1781051 pod_ready.go:93] pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 11:56:37.733379 1781051 pod_ready.go:82] duration metric: took 4.607971ms for pod "kube-controller-manager-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:37.733392 1781051 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b2mg2" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:37.738183 1781051 pod_ready.go:93] pod "kube-proxy-b2mg2" in "kube-system" namespace has status "Ready":"True"
	I0923 11:56:37.738209 1781051 pod_ready.go:82] duration metric: took 4.807531ms for pod "kube-proxy-b2mg2" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:37.738221 1781051 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:38.034990 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:38.070311 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:38.117442 1781051 pod_ready.go:93] pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace has status "Ready":"True"
	I0923 11:56:38.117473 1781051 pod_ready.go:82] duration metric: took 379.242263ms for pod "kube-scheduler-ubuntu-20-agent-12" in "kube-system" namespace to be "Ready" ...
	I0923 11:56:38.117542 1781051 pod_ready.go:39] duration metric: took 8.933560443s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 11:56:38.117576 1781051 api_server.go:52] waiting for apiserver process to appear ...
	I0923 11:56:38.117654 1781051 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:56:38.138684 1781051 api_server.go:72] duration metric: took 9.452412663s to wait for apiserver process to appear ...
	I0923 11:56:38.138724 1781051 api_server.go:88] waiting for apiserver healthz status ...
	I0923 11:56:38.138753 1781051 api_server.go:253] Checking apiserver healthz at https://10.128.15.239:8443/healthz ...
	I0923 11:56:38.142930 1781051 api_server.go:279] https://10.128.15.239:8443/healthz returned 200:
	ok
	I0923 11:56:38.144052 1781051 api_server.go:141] control plane version: v1.31.1
	I0923 11:56:38.144085 1781051 api_server.go:131] duration metric: took 5.352852ms to wait for apiserver health ...
	I0923 11:56:38.144097 1781051 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 11:56:38.323495 1781051 system_pods.go:59] 16 kube-system pods found
	I0923 11:56:38.323538 1781051 system_pods.go:61] "coredns-7c65d6cfc9-858gb" [c9c5c012-fc08-4fae-9a44-5abf4436ad32] Running
	I0923 11:56:38.323549 1781051 system_pods.go:61] "csi-hostpath-attacher-0" [fa3a9d5c-9c88-4f53-ae8a-e4c58ae322d2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:56:38.323556 1781051 system_pods.go:61] "csi-hostpath-resizer-0" [e389cffb-0833-4dfb-8509-951202e0d210] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 11:56:38.323564 1781051 system_pods.go:61] "csi-hostpathplugin-x6n6k" [af382ec7-0cf4-4c49-ba02-accd4a1c899b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:56:38.323568 1781051 system_pods.go:61] "etcd-ubuntu-20-agent-12" [b2de5e95-1e8d-478d-9d64-12788768093d] Running
	I0923 11:56:38.323572 1781051 system_pods.go:61] "kube-apiserver-ubuntu-20-agent-12" [4d0c5099-e667-48d0-932a-42fd293822d9] Running
	I0923 11:56:38.323577 1781051 system_pods.go:61] "kube-controller-manager-ubuntu-20-agent-12" [4cad5f2d-2405-4330-8627-05bab014b1f2] Running
	I0923 11:56:38.323580 1781051 system_pods.go:61] "kube-proxy-b2mg2" [74ceab05-3405-441e-beee-4ee25149148a] Running
	I0923 11:56:38.323588 1781051 system_pods.go:61] "kube-scheduler-ubuntu-20-agent-12" [4128fec1-e855-4ed6-812a-44de85b9d073] Running
	I0923 11:56:38.323594 1781051 system_pods.go:61] "metrics-server-84c5f94fbc-7b8pp" [cf935ea6-5c12-4817-a26f-dda89bea406b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:56:38.323597 1781051 system_pods.go:61] "nvidia-device-plugin-daemonset-7dk7q" [65dd7ae4-8196-4876-ab03-51353f34538f] Running
	I0923 11:56:38.323604 1781051 system_pods.go:61] "registry-66c9cd494c-jmxwm" [93925dd9-5243-4f8b-a2b4-53a5b7b50bcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:56:38.323635 1781051 system_pods.go:61] "registry-proxy-bxrn8" [88691c9c-8d3d-46c2-8169-7df6164800f7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:56:38.323641 1781051 system_pods.go:61] "snapshot-controller-56fcc65765-929ll" [a7d184a0-3c06-4e47-9bc1-4cbab4dbffbb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:56:38.323649 1781051 system_pods.go:61] "snapshot-controller-56fcc65765-wbg4f" [44fe1a77-12d5-4710-8b33-feb7303e1260] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:56:38.323654 1781051 system_pods.go:61] "storage-provisioner" [41b6ca81-19a1-4756-ac25-a8cbf7069b60] Running
	I0923 11:56:38.323661 1781051 system_pods.go:74] duration metric: took 179.557084ms to wait for pod list to return data ...
	I0923 11:56:38.323669 1781051 default_sa.go:34] waiting for default service account to be created ...
	I0923 11:56:38.517009 1781051 default_sa.go:45] found service account: "default"
	I0923 11:56:38.517044 1781051 default_sa.go:55] duration metric: took 193.36575ms for default service account to be created ...
	I0923 11:56:38.517057 1781051 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 11:56:38.534694 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:38.570139 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:38.722865 1781051 system_pods.go:86] 16 kube-system pods found
	I0923 11:56:38.722903 1781051 system_pods.go:89] "coredns-7c65d6cfc9-858gb" [c9c5c012-fc08-4fae-9a44-5abf4436ad32] Running
	I0923 11:56:38.722913 1781051 system_pods.go:89] "csi-hostpath-attacher-0" [fa3a9d5c-9c88-4f53-ae8a-e4c58ae322d2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 11:56:38.722921 1781051 system_pods.go:89] "csi-hostpath-resizer-0" [e389cffb-0833-4dfb-8509-951202e0d210] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 11:56:38.722929 1781051 system_pods.go:89] "csi-hostpathplugin-x6n6k" [af382ec7-0cf4-4c49-ba02-accd4a1c899b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 11:56:38.722934 1781051 system_pods.go:89] "etcd-ubuntu-20-agent-12" [b2de5e95-1e8d-478d-9d64-12788768093d] Running
	I0923 11:56:38.722939 1781051 system_pods.go:89] "kube-apiserver-ubuntu-20-agent-12" [4d0c5099-e667-48d0-932a-42fd293822d9] Running
	I0923 11:56:38.722946 1781051 system_pods.go:89] "kube-controller-manager-ubuntu-20-agent-12" [4cad5f2d-2405-4330-8627-05bab014b1f2] Running
	I0923 11:56:38.722952 1781051 system_pods.go:89] "kube-proxy-b2mg2" [74ceab05-3405-441e-beee-4ee25149148a] Running
	I0923 11:56:38.722959 1781051 system_pods.go:89] "kube-scheduler-ubuntu-20-agent-12" [4128fec1-e855-4ed6-812a-44de85b9d073] Running
	I0923 11:56:38.722971 1781051 system_pods.go:89] "metrics-server-84c5f94fbc-7b8pp" [cf935ea6-5c12-4817-a26f-dda89bea406b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 11:56:38.722981 1781051 system_pods.go:89] "nvidia-device-plugin-daemonset-7dk7q" [65dd7ae4-8196-4876-ab03-51353f34538f] Running
	I0923 11:56:38.722988 1781051 system_pods.go:89] "registry-66c9cd494c-jmxwm" [93925dd9-5243-4f8b-a2b4-53a5b7b50bcb] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0923 11:56:38.722993 1781051 system_pods.go:89] "registry-proxy-bxrn8" [88691c9c-8d3d-46c2-8169-7df6164800f7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 11:56:38.723008 1781051 system_pods.go:89] "snapshot-controller-56fcc65765-929ll" [a7d184a0-3c06-4e47-9bc1-4cbab4dbffbb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:56:38.723017 1781051 system_pods.go:89] "snapshot-controller-56fcc65765-wbg4f" [44fe1a77-12d5-4710-8b33-feb7303e1260] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 11:56:38.723024 1781051 system_pods.go:89] "storage-provisioner" [41b6ca81-19a1-4756-ac25-a8cbf7069b60] Running
	I0923 11:56:38.723034 1781051 system_pods.go:126] duration metric: took 205.969306ms to wait for k8s-apps to be running ...
	I0923 11:56:38.723046 1781051 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 11:56:38.723109 1781051 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:56:38.738680 1781051 system_svc.go:56] duration metric: took 15.616793ms WaitForService to wait for kubelet
	I0923 11:56:38.738716 1781051 kubeadm.go:582] duration metric: took 10.052454629s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 11:56:38.738743 1781051 node_conditions.go:102] verifying NodePressure condition ...
	I0923 11:56:38.917746 1781051 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0923 11:56:38.917777 1781051 node_conditions.go:123] node cpu capacity is 8
	I0923 11:56:38.917790 1781051 node_conditions.go:105] duration metric: took 179.041372ms to run NodePressure ...
	I0923 11:56:38.917802 1781051 start.go:241] waiting for startup goroutines ...
	I0923 11:56:39.035769 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:39.069668 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:39.535183 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:39.568811 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:40.034371 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:40.068935 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:40.535130 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:40.569283 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:41.034173 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:41.069513 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:41.535525 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:41.569648 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:42.035528 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:42.069157 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:42.534851 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:42.574770 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:43.034772 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:43.069652 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:43.534807 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:43.569901 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:44.036050 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 11:56:44.137162 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:44.535084 1781051 kapi.go:107] duration metric: took 14.504262566s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 11:56:44.569120 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:45.068986 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:45.570042 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:46.069783 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:46.569509 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:47.068409 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:47.570182 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:48.069522 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:48.569555 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:49.069876 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:49.574266 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:50.069418 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:50.570217 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:51.068813 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:51.570599 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:52.069897 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:52.586303 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:53.070166 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:53.570415 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:54.080171 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:54.570440 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:55.069803 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:55.570127 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:56.069336 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:56.569831 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:57.069056 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:57.569445 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:58.069782 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:58.569160 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:59.069487 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:56:59.570984 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:57:00.069498 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:57:00.569027 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:57:01.069844 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:57:01.569873 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:57:02.069237 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:57:02.569604 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 11:57:03.068907 1781051 kapi.go:107] duration metric: took 31.004433354s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 11:57:17.895303 1781051 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 11:57:17.895331 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:18.392555 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:18.893341 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:19.392701 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:19.893236 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:20.393263 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:20.893402 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:21.392596 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:21.892995 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:22.392790 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:22.893428 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:23.392813 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:23.893284 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:24.392465 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:24.892952 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:25.392901 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:25.892648 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:26.393099 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:26.892811 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:27.393513 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:27.892214 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:28.392880 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:28.892717 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:29.392884 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:29.893541 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:30.393042 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:30.892677 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:31.392671 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:31.892992 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:32.393024 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:32.892710 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:33.393090 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:33.893607 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:34.392338 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:34.892809 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:35.393158 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:35.893067 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:36.392883 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:36.892937 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:37.392220 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:37.892852 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:38.392634 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:38.893098 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:39.392330 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:39.892209 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:40.394820 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:40.892910 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:41.393624 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:41.893123 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:42.392374 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:42.892466 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:43.393281 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:43.893313 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:44.392498 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:44.892442 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:45.393076 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:45.892340 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:46.392403 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:46.892594 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:47.394311 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:47.893062 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:48.393243 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:48.891831 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:49.393097 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:49.893729 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:50.393642 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:50.893253 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:51.393287 1781051 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 11:57:51.892829 1781051 kapi.go:107] duration metric: took 1m15.003823693s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 11:57:51.894443 1781051 out.go:177] * Your GCP credentials will now be mounted into every pod created in the minikube cluster.
	I0923 11:57:51.895557 1781051 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 11:57:51.896709 1781051 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 11:57:51.898147 1781051 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, storage-provisioner, metrics-server, storage-provisioner-rancher, yakd, inspektor-gadget, volcano, volumesnapshots, registry, csi-hostpath-driver, gcp-auth
	I0923 11:57:51.899301 1781051 addons.go:510] duration metric: took 1m23.218450487s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass storage-provisioner metrics-server storage-provisioner-rancher yakd inspektor-gadget volcano volumesnapshots registry csi-hostpath-driver gcp-auth]
	I0923 11:57:51.899347 1781051 start.go:246] waiting for cluster config update ...
	I0923 11:57:51.899370 1781051 start.go:255] writing updated cluster config ...
	I0923 11:57:51.899676 1781051 exec_runner.go:51] Run: rm -f paused
	I0923 11:57:51.950503 1781051 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 11:57:51.952219 1781051 out.go:177] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
	
	
	==> Docker <==
	-- Logs begin at Fri 2024-08-02 09:11:33 UTC, end at Mon 2024-09-23 12:07:43 UTC. --
	Sep 23 11:58:51 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T11:58:51.011627502Z" level=info msg="ignoring event" container=b32abe9c5622eec15dd32f2d8e5ab65fc86937b6cdf775e385f46dab61c1d222 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:59:09 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T11:59:09.492575020Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=6f9d5951e94e5dbc traceID=244969c45e174dc4f34f7483613a40a7
	Sep 23 11:59:09 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T11:59:09.494493142Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=6f9d5951e94e5dbc traceID=244969c45e174dc4f34f7483613a40a7
	Sep 23 11:59:56 ubuntu-20-agent-12 cri-dockerd[1781596]: time="2024-09-23T11:59:56Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 11:59:58 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T11:59:58.189787793Z" level=info msg="ignoring event" container=493ec38216df41eaa877fcd966ecefdc1d0fcba8fb04a9bd9c91531c19451c5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 11:59:59 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T11:59:59.496655665Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=c81a5c7b57a31573 traceID=f6fdf2802f139fa87ca6478f03cfca99
	Sep 23 11:59:59 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T11:59:59.499028954Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=c81a5c7b57a31573 traceID=f6fdf2802f139fa87ca6478f03cfca99
	Sep 23 12:01:20 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:01:20.496591229Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=f72df7acfea7d927 traceID=1321e2df9c27022af5614dc761330b1e
	Sep 23 12:01:20 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:01:20.498764409Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=f72df7acfea7d927 traceID=1321e2df9c27022af5614dc761330b1e
	Sep 23 12:02:42 ubuntu-20-agent-12 cri-dockerd[1781596]: time="2024-09-23T12:02:42Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 23 12:02:44 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:02:44.225338203Z" level=info msg="ignoring event" container=904d5f3b1a9217a53eaf47172f0f7d7962d4d8ccbb04fcb1500f8f0519c8f634 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:04:07 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:04:07.488013294Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5e989a46a991ad22 traceID=1a118c81cde835655479f45816d6173b
	Sep 23 12:04:07 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:04:07.491011861Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc\": unauthorized: authentication failed" spanID=5e989a46a991ad22 traceID=1a118c81cde835655479f45816d6173b
	Sep 23 12:06:43 ubuntu-20-agent-12 cri-dockerd[1781596]: time="2024-09-23T12:06:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cc3c29f2f40ee28eb6c7f3f3092ae4c92acd588d18346f9af7d96d2a685bb9a8/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Sep 23 12:06:43 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:06:43.156571397Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=1197883cdcc45121 traceID=0341745fe92aabee635b1146255083eb
	Sep 23 12:06:43 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:06:43.158902941Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=1197883cdcc45121 traceID=0341745fe92aabee635b1146255083eb
	Sep 23 12:06:54 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:06:54.489621621Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=ee33554c14137b5d traceID=24abe3850e642ad9f109765cc46fe241
	Sep 23 12:06:54 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:06:54.491920997Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=ee33554c14137b5d traceID=24abe3850e642ad9f109765cc46fe241
	Sep 23 12:07:20 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:07:20.494475296Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=630a9407deccefbe traceID=fb3a39fda8c385c8851f3af3e41c38ba
	Sep 23 12:07:20 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:07:20.496640595Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=630a9407deccefbe traceID=fb3a39fda8c385c8851f3af3e41c38ba
	Sep 23 12:07:42 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:07:42.706040711Z" level=info msg="ignoring event" container=cc3c29f2f40ee28eb6c7f3f3092ae4c92acd588d18346f9af7d96d2a685bb9a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:07:43 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:07:43.019821366Z" level=info msg="ignoring event" container=58c39841a6bac8efd5d11c4a1c09172ee85e6f2620621ea0056acc8a6cc14ee9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:07:43 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:07:43.090399416Z" level=info msg="ignoring event" container=64574e3a4c3d9f0094e8a5aa93fcaf6eb81eeac4f513178f8bd44064801c6fd1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:07:43 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:07:43.166092245Z" level=info msg="ignoring event" container=38369a999b0d231f4d0a47f760cfbbb2463ce6d93099cad97b6e9a42c7e2bc53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 12:07:43 ubuntu-20-agent-12 dockerd[1781267]: time="2024-09-23T12:07:43.263643219Z" level=info msg="ignoring event" container=9c3d2160999c8669d909230cdb682c6756f3a58ac607f076c25113510132a4fb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	904d5f3b1a921       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            5 minutes ago       Exited              gadget                                   6                   aea95ba9b37d2       gadget-grxjc
	39420759bb826       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   825e10d876c29       gcp-auth-89d5ffd79-bstk2
	d5ac883710510       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          10 minutes ago      Running             csi-snapshotter                          0                   8faf5688c5d47       csi-hostpathplugin-x6n6k
	f38a05787819c       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          10 minutes ago      Running             csi-provisioner                          0                   8faf5688c5d47       csi-hostpathplugin-x6n6k
	0643a209367e7       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            10 minutes ago      Running             liveness-probe                           0                   8faf5688c5d47       csi-hostpathplugin-x6n6k
	2eeccc40aaeef       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           10 minutes ago      Running             hostpath                                 0                   8faf5688c5d47       csi-hostpathplugin-x6n6k
	3230170c657ef       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                10 minutes ago      Running             node-driver-registrar                    0                   8faf5688c5d47       csi-hostpathplugin-x6n6k
	10572704ed783       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   10 minutes ago      Running             csi-external-health-monitor-controller   0                   8faf5688c5d47       csi-hostpathplugin-x6n6k
	568fc0109d513       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              10 minutes ago      Running             csi-resizer                              0                   d3e48434b41ba       csi-hostpath-resizer-0
	ddc033cf4883e       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             10 minutes ago      Running             csi-attacher                             0                   54213d2572c24       csi-hostpath-attacher-0
	a849872cf0c86       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   3bc4c2a2b04f7       snapshot-controller-56fcc65765-wbg4f
	7737f63973612       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      10 minutes ago      Running             volume-snapshot-controller               0                   aa6d715db16db       snapshot-controller-56fcc65765-929ll
	1be61b729a189       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        10 minutes ago      Running             yakd                                     0                   c5c34d81a2182       yakd-dashboard-67d98fc6b-ct5tq
	5b7c52ee64223       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       10 minutes ago      Running             local-path-provisioner                   0                   352856a4130b9       local-path-provisioner-86d989889c-267mq
	64574e3a4c3d9       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              11 minutes ago      Exited              registry-proxy                           0                   9c3d2160999c8       registry-proxy-bxrn8
	58c39841a6bac       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             11 minutes ago      Exited              registry                                 0                   38369a999b0d2       registry-66c9cd494c-jmxwm
	79f3c398e314f       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        11 minutes ago      Running             metrics-server                           0                   70f422b71f331       metrics-server-84c5f94fbc-7b8pp
	6c16f74a6545c       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e                               11 minutes ago      Running             cloud-spanner-emulator                   0                   e79d61680fa8c       cloud-spanner-emulator-5b584cc74-rjtsr
	350f102b9651b       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                                     11 minutes ago      Running             nvidia-device-plugin-ctr                 0                   667567adc3fc0       nvidia-device-plugin-daemonset-7dk7q
	bb8d6ad9a9236       6e38f40d628db                                                                                                                                11 minutes ago      Running             storage-provisioner                      0                   ffd312a586e37       storage-provisioner
	aab5a623557ed       c69fa2e9cbf5f                                                                                                                                11 minutes ago      Running             coredns                                  0                   470898f82c1ec       coredns-7c65d6cfc9-858gb
	2ec4580b5bc06       60c005f310ff3                                                                                                                                11 minutes ago      Running             kube-proxy                               0                   f8fcbed5ac59d       kube-proxy-b2mg2
	6caa2c70aae5d       6bab7719df100                                                                                                                                11 minutes ago      Running             kube-apiserver                           0                   75728157b19a2       kube-apiserver-ubuntu-20-agent-12
	f1bc932285775       2e96e5913fc06                                                                                                                                11 minutes ago      Running             etcd                                     0                   9747faff6715c       etcd-ubuntu-20-agent-12
	95bdf4609da83       9aa1fad941575                                                                                                                                11 minutes ago      Running             kube-scheduler                           0                   2c7b505d2aa7e       kube-scheduler-ubuntu-20-agent-12
	e512e63b87633       175ffd71cce3d                                                                                                                                11 minutes ago      Running             kube-controller-manager                  0                   6521bb50eb522       kube-controller-manager-ubuntu-20-agent-12
	
	
	==> coredns [aab5a623557e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7cdff32fc9c56df278621e3df8c1fd38e90c1c6357bf9c78282ddfe67ac8fc01159ee42f7229906198d471a617bf80a893de29f65c21937e1e5596cf6a48e762
	[INFO] Reloading complete
	[INFO] 127.0.0.1:52275 - 27859 "HINFO IN 2826684241187033425.1811380730309944732. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041632705s
	[INFO] 10.244.0.24:36597 - 7913 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000350064s
	[INFO] 10.244.0.24:56581 - 49299 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000438516s
	[INFO] 10.244.0.24:45018 - 46386 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000198979s
	[INFO] 10.244.0.24:55733 - 16402 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000208283s
	[INFO] 10.244.0.24:59369 - 57718 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125173s
	[INFO] 10.244.0.24:39504 - 2262 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000149026s
	[INFO] 10.244.0.24:41342 - 29591 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.003504227s
	[INFO] 10.244.0.24:36279 - 23913 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.003644842s
	[INFO] 10.244.0.24:37357 - 14120 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004219634s
	[INFO] 10.244.0.24:37041 - 39467 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004349191s
	[INFO] 10.244.0.24:39559 - 4133 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.002508691s
	[INFO] 10.244.0.24:40685 - 31146 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005643457s
	[INFO] 10.244.0.24:54168 - 54930 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001351908s
	[INFO] 10.244.0.24:33860 - 33080 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001233244s
	
	
	==> describe nodes <==
	Name:               ubuntu-20-agent-12
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ubuntu-20-agent-12
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=30f673d6edb6d12f8aba2f7e30667ea1b6d205d1
	                    minikube.k8s.io/name=minikube
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T11_56_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=ubuntu-20-agent-12
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"ubuntu-20-agent-12"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 11:56:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ubuntu-20-agent-12
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 12:07:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 12:03:30 +0000   Mon, 23 Sep 2024 11:56:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 12:03:30 +0000   Mon, 23 Sep 2024 11:56:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 12:03:30 +0000   Mon, 23 Sep 2024 11:56:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 12:03:30 +0000   Mon, 23 Sep 2024 11:56:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.128.15.239
	  Hostname:    ubuntu-20-agent-12
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 591c9f1229383743e2bfc56a050d43d1
	  System UUID:                26e2d22b-def2-c216-b2a9-007020fa8ce7
	  Boot ID:                    83656df0-482a-417d-b7fc-90bc5fb37652
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 20.04.6 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-5b584cc74-rjtsr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gadget                      gadget-grxjc                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  gcp-auth                    gcp-auth-89d5ffd79-bstk2                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-858gb                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 csi-hostpath-attacher-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpath-resizer-0                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 csi-hostpathplugin-x6n6k                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ubuntu-20-agent-12                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kube-apiserver-ubuntu-20-agent-12             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ubuntu-20-agent-12    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-b2mg2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ubuntu-20-agent-12             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-84c5f94fbc-7b8pp               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-7dk7q          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-929ll          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 snapshot-controller-56fcc65765-wbg4f          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-267mq       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-ct5tq                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             498Mi (1%)  426Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 11m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m   kubelet          Node ubuntu-20-agent-12 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m   node-controller  Node ubuntu-20-agent-12 event: Registered Node ubuntu-20-agent-12 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 1d a5 95 e3 3a 08 06
	[  +0.036803] IPv4: martian source 10.244.0.1 from 10.244.0.14, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1a 18 28 f3 24 b3 08 06
	[  +2.158115] IPv4: martian source 10.244.0.1 from 10.244.0.15, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 49 91 2f aa 03 08 06
	[  +1.105904] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 c3 0f ee 1d f0 08 06
	[  +3.212779] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 3e 11 08 c4 fc 08 06
	[  +2.674527] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 3e df 57 c9 5a 08 06
	[  +0.348835] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 82 11 01 70 7d 81 08 06
	[  +0.188304] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 5c 7e 32 20 6b 08 06
	[  +4.787029] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 68 3c 23 8e 4f 08 06
	[Sep23 11:57] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0e d0 9f 61 de e7 08 06
	[  +0.029011] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 c1 e6 80 d5 bf 08 06
	[  +9.923769] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a6 ad a2 b4 76 6a 08 06
	[  +0.000542] IPv4: martian source 10.244.0.24 from 10.244.0.5, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 8a 72 f7 f8 17 55 08 06
	
	
	==> etcd [f1bc93228577] <==
	{"level":"info","ts":"2024-09-23T11:56:19.372596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac switched to configuration voters=(15925888975222164140)"}
	{"level":"info","ts":"2024-09-23T11:56:19.372778Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c05a044d5786a1e7","local-member-id":"dd041fa4dc6d4aac","added-peer-id":"dd041fa4dc6d4aac","added-peer-peer-urls":["https://10.128.15.239:2380"]}
	{"level":"info","ts":"2024-09-23T11:56:19.958485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-23T11:56:19.958584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-23T11:56:19.958604Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac received MsgPreVoteResp from dd041fa4dc6d4aac at term 1"}
	{"level":"info","ts":"2024-09-23T11:56:19.958617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac became candidate at term 2"}
	{"level":"info","ts":"2024-09-23T11:56:19.958623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac received MsgVoteResp from dd041fa4dc6d4aac at term 2"}
	{"level":"info","ts":"2024-09-23T11:56:19.958741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dd041fa4dc6d4aac became leader at term 2"}
	{"level":"info","ts":"2024-09-23T11:56:19.958754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dd041fa4dc6d4aac elected leader dd041fa4dc6d4aac at term 2"}
	{"level":"info","ts":"2024-09-23T11:56:19.960013Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:56:19.960045Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T11:56:19.960015Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dd041fa4dc6d4aac","local-member-attributes":"{Name:ubuntu-20-agent-12 ClientURLs:[https://10.128.15.239:2379]}","request-path":"/0/members/dd041fa4dc6d4aac/attributes","cluster-id":"c05a044d5786a1e7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T11:56:19.960035Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:56:19.960392Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T11:56:19.960420Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T11:56:19.960965Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c05a044d5786a1e7","local-member-id":"dd041fa4dc6d4aac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:56:19.961067Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:56:19.961107Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T11:56:19.961362Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:56:19.961786Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T11:56:19.962468Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T11:56:19.962908Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"10.128.15.239:2379"}
	{"level":"info","ts":"2024-09-23T12:06:19.980469Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1689}
	{"level":"info","ts":"2024-09-23T12:06:20.005517Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1689,"took":"24.467973ms","hash":1671387116,"current-db-size-bytes":7839744,"current-db-size":"7.8 MB","current-db-size-in-use-bytes":4198400,"current-db-size-in-use":"4.2 MB"}
	{"level":"info","ts":"2024-09-23T12:06:20.005577Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1671387116,"revision":1689,"compact-revision":-1}
	
	
	==> gcp-auth [39420759bb82] <==
	2024/09/23 11:57:51 GCP Auth Webhook started!
	2024/09/23 11:58:08 Ready to marshal response ...
	2024/09/23 11:58:08 Ready to write response ...
	2024/09/23 11:58:09 Ready to marshal response ...
	2024/09/23 11:58:09 Ready to write response ...
	2024/09/23 11:58:30 Ready to marshal response ...
	2024/09/23 11:58:30 Ready to write response ...
	2024/09/23 11:58:30 Ready to marshal response ...
	2024/09/23 11:58:30 Ready to write response ...
	2024/09/23 11:58:30 Ready to marshal response ...
	2024/09/23 11:58:30 Ready to write response ...
	2024/09/23 12:06:42 Ready to marshal response ...
	2024/09/23 12:06:42 Ready to write response ...
	
	
	==> kernel <==
	 12:07:43 up 1 day, 17:50,  0 users,  load average: 0.15, 0.36, 0.72
	Linux ubuntu-20-agent-12 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.6 LTS"
	
	
	==> kube-apiserver [6caa2c70aae5] <==
	W0923 11:57:10.939128       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.255.93:443: connect: connection refused
	W0923 11:57:17.844102       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.47.184:443: connect: connection refused
	E0923 11:57:17.844153       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.47.184:443: connect: connection refused" logger="UnhandledError"
	W0923 11:57:39.811368       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.47.184:443: connect: connection refused
	E0923 11:57:39.811420       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.47.184:443: connect: connection refused" logger="UnhandledError"
	W0923 11:57:39.874395       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.106.47.184:443: connect: connection refused
	E0923 11:57:39.874441       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.106.47.184:443: connect: connection refused" logger="UnhandledError"
	I0923 11:58:08.222485       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0923 11:58:08.242600       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0923 11:58:20.652897       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0923 11:58:20.680259       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0923 11:58:20.809007       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 11:58:20.814687       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 11:58:20.827438       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0923 11:58:20.960414       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0923 11:58:21.004054       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 11:58:21.021639       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0923 11:58:21.046956       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0923 11:58:21.721695       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0923 11:58:21.851781       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0923 11:58:21.960554       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0923 11:58:22.047180       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0923 11:58:22.047204       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 11:58:22.116282       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0923 11:58:22.243078       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	
	
	==> kube-controller-manager [e512e63b8763] <==
	W0923 12:06:22.210018       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:22.210068       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:22.531524       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:22.531575       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:45.447084       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:45.447150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:45.700322       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:45.700375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:49.797432       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:49.797484       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:55.855261       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:55.855313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:56.421957       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:56.422012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:06:59.177972       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:06:59.178035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:07:04.574225       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:07:04.574284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:07:26.339799       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:07:26.339853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:07:34.481402       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:07:34.481452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 12:07:36.781540       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 12:07:36.781592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 12:07:42.983700       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="8.767µs"
	
	
	==> kube-proxy [2ec4580b5bc0] <==
	I0923 11:56:29.556570       1 server_linux.go:66] "Using iptables proxy"
	I0923 11:56:29.858981       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["10.128.15.239"]
	E0923 11:56:29.869598       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 11:56:29.957983       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 11:56:29.958064       1 server_linux.go:169] "Using iptables Proxier"
	I0923 11:56:29.961639       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 11:56:29.962932       1 server.go:483] "Version info" version="v1.31.1"
	I0923 11:56:29.962959       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 11:56:29.965742       1 config.go:199] "Starting service config controller"
	I0923 11:56:29.965759       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 11:56:29.965782       1 config.go:105] "Starting endpoint slice config controller"
	I0923 11:56:29.965787       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 11:56:29.965921       1 config.go:328] "Starting node config controller"
	I0923 11:56:29.965944       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 11:56:30.065923       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0923 11:56:30.066005       1 shared_informer.go:320] Caches are synced for node config
	I0923 11:56:30.066022       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [95bdf4609da8] <==
	E0923 11:56:20.955123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0923 11:56:20.955133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:56:20.955214       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:56:20.955233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:56:21.899330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 11:56:21.899380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:56:21.915892       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 11:56:21.915949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:56:22.031161       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 11:56:22.031206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:56:22.089191       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0923 11:56:22.089238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 11:56:22.095682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 11:56:22.095735       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 11:56:22.103212       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0923 11:56:22.103258       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:56:22.156922       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0923 11:56:22.156979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:56:22.198494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 11:56:22.198550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:56:22.204090       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 11:56:22.204136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 11:56:22.327514       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 11:56:22.327573       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 11:56:25.251008       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Logs begin at Fri 2024-08-02 09:11:33 UTC, end at Mon 2024-09-23 12:07:44 UTC. --
	Sep 23 12:07:34 ubuntu-20-agent-12 kubelet[1782491]: E0923 12:07:34.463603 1782491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="b0e8860a-03fe-4578-9d36-daca06418649"
	Sep 23 12:07:38 ubuntu-20-agent-12 kubelet[1782491]: E0923 12:07:38.464259 1782491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="c8c4b732-8bb1-48c8-9448-aa24554a71be"
	Sep 23 12:07:40 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:40.462152 1782491 scope.go:117] "RemoveContainer" containerID="904d5f3b1a9217a53eaf47172f0f7d7962d4d8ccbb04fcb1500f8f0519c8f634"
	Sep 23 12:07:40 ubuntu-20-agent-12 kubelet[1782491]: E0923 12:07:40.462397 1782491 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-grxjc_gadget(fba7a151-42b4-4a6b-8826-0ab77cc66393)\"" pod="gadget/gadget-grxjc" podUID="fba7a151-42b4-4a6b-8826-0ab77cc66393"
	Sep 23 12:07:42 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:42.914734 1782491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmx45\" (UniqueName: \"kubernetes.io/projected/b0e8860a-03fe-4578-9d36-daca06418649-kube-api-access-pmx45\") pod \"b0e8860a-03fe-4578-9d36-daca06418649\" (UID: \"b0e8860a-03fe-4578-9d36-daca06418649\") "
	Sep 23 12:07:42 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:42.914793 1782491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b0e8860a-03fe-4578-9d36-daca06418649-gcp-creds\") pod \"b0e8860a-03fe-4578-9d36-daca06418649\" (UID: \"b0e8860a-03fe-4578-9d36-daca06418649\") "
	Sep 23 12:07:42 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:42.914908 1782491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0e8860a-03fe-4578-9d36-daca06418649-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b0e8860a-03fe-4578-9d36-daca06418649" (UID: "b0e8860a-03fe-4578-9d36-daca06418649"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 12:07:42 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:42.917504 1782491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0e8860a-03fe-4578-9d36-daca06418649-kube-api-access-pmx45" (OuterVolumeSpecName: "kube-api-access-pmx45") pod "b0e8860a-03fe-4578-9d36-daca06418649" (UID: "b0e8860a-03fe-4578-9d36-daca06418649"). InnerVolumeSpecName "kube-api-access-pmx45". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.016076 1782491 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pmx45\" (UniqueName: \"kubernetes.io/projected/b0e8860a-03fe-4578-9d36-daca06418649-kube-api-access-pmx45\") on node \"ubuntu-20-agent-12\" DevicePath \"\""
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.016115 1782491 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b0e8860a-03fe-4578-9d36-daca06418649-gcp-creds\") on node \"ubuntu-20-agent-12\" DevicePath \"\""
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.318378 1782491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pq5w\" (UniqueName: \"kubernetes.io/projected/93925dd9-5243-4f8b-a2b4-53a5b7b50bcb-kube-api-access-2pq5w\") pod \"93925dd9-5243-4f8b-a2b4-53a5b7b50bcb\" (UID: \"93925dd9-5243-4f8b-a2b4-53a5b7b50bcb\") "
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.318440 1782491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4xdt\" (UniqueName: \"kubernetes.io/projected/88691c9c-8d3d-46c2-8169-7df6164800f7-kube-api-access-r4xdt\") pod \"88691c9c-8d3d-46c2-8169-7df6164800f7\" (UID: \"88691c9c-8d3d-46c2-8169-7df6164800f7\") "
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.320509 1782491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88691c9c-8d3d-46c2-8169-7df6164800f7-kube-api-access-r4xdt" (OuterVolumeSpecName: "kube-api-access-r4xdt") pod "88691c9c-8d3d-46c2-8169-7df6164800f7" (UID: "88691c9c-8d3d-46c2-8169-7df6164800f7"). InnerVolumeSpecName "kube-api-access-r4xdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.320676 1782491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93925dd9-5243-4f8b-a2b4-53a5b7b50bcb-kube-api-access-2pq5w" (OuterVolumeSpecName: "kube-api-access-2pq5w") pod "93925dd9-5243-4f8b-a2b4-53a5b7b50bcb" (UID: "93925dd9-5243-4f8b-a2b4-53a5b7b50bcb"). InnerVolumeSpecName "kube-api-access-2pq5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.419142 1782491 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2pq5w\" (UniqueName: \"kubernetes.io/projected/93925dd9-5243-4f8b-a2b4-53a5b7b50bcb-kube-api-access-2pq5w\") on node \"ubuntu-20-agent-12\" DevicePath \"\""
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.419195 1782491 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r4xdt\" (UniqueName: \"kubernetes.io/projected/88691c9c-8d3d-46c2-8169-7df6164800f7-kube-api-access-r4xdt\") on node \"ubuntu-20-agent-12\" DevicePath \"\""
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.473821 1782491 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0e8860a-03fe-4578-9d36-daca06418649" path="/var/lib/kubelet/pods/b0e8860a-03fe-4578-9d36-daca06418649/volumes"
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.938589 1782491 scope.go:117] "RemoveContainer" containerID="64574e3a4c3d9f0094e8a5aa93fcaf6eb81eeac4f513178f8bd44064801c6fd1"
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.957722 1782491 scope.go:117] "RemoveContainer" containerID="64574e3a4c3d9f0094e8a5aa93fcaf6eb81eeac4f513178f8bd44064801c6fd1"
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: E0923 12:07:43.958771 1782491 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 64574e3a4c3d9f0094e8a5aa93fcaf6eb81eeac4f513178f8bd44064801c6fd1" containerID="64574e3a4c3d9f0094e8a5aa93fcaf6eb81eeac4f513178f8bd44064801c6fd1"
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.958820 1782491 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"64574e3a4c3d9f0094e8a5aa93fcaf6eb81eeac4f513178f8bd44064801c6fd1"} err="failed to get container status \"64574e3a4c3d9f0094e8a5aa93fcaf6eb81eeac4f513178f8bd44064801c6fd1\": rpc error: code = Unknown desc = Error response from daemon: No such container: 64574e3a4c3d9f0094e8a5aa93fcaf6eb81eeac4f513178f8bd44064801c6fd1"
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.958850 1782491 scope.go:117] "RemoveContainer" containerID="58c39841a6bac8efd5d11c4a1c09172ee85e6f2620621ea0056acc8a6cc14ee9"
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.974912 1782491 scope.go:117] "RemoveContainer" containerID="58c39841a6bac8efd5d11c4a1c09172ee85e6f2620621ea0056acc8a6cc14ee9"
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: E0923 12:07:43.975976 1782491 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 58c39841a6bac8efd5d11c4a1c09172ee85e6f2620621ea0056acc8a6cc14ee9" containerID="58c39841a6bac8efd5d11c4a1c09172ee85e6f2620621ea0056acc8a6cc14ee9"
	Sep 23 12:07:43 ubuntu-20-agent-12 kubelet[1782491]: I0923 12:07:43.976030 1782491 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"58c39841a6bac8efd5d11c4a1c09172ee85e6f2620621ea0056acc8a6cc14ee9"} err="failed to get container status \"58c39841a6bac8efd5d11c4a1c09172ee85e6f2620621ea0056acc8a6cc14ee9\": rpc error: code = Unknown desc = Error response from daemon: No such container: 58c39841a6bac8efd5d11c4a1c09172ee85e6f2620621ea0056acc8a6cc14ee9"
	
	
	==> storage-provisioner [bb8d6ad9a923] <==
	I0923 11:56:31.305717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 11:56:31.334928       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 11:56:31.334992       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 11:56:31.350393       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 11:56:31.350645       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-12_9d8a8a35-1a2c-4080-aa30-24f599c63bc8!
	I0923 11:56:31.351010       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a7c2a66e-9110-4b98-8433-2fe6b6e0d6d9", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu-20-agent-12_9d8a8a35-1a2c-4080-aa30-24f599c63bc8 became leader
	I0923 11:56:31.451430       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ubuntu-20-agent-12_9d8a8a35-1a2c-4080-aa30-24f599c63bc8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p minikube -n minikube
helpers_test.go:261: (dbg) Run:  kubectl --context minikube get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context minikube describe pod busybox
helpers_test.go:282: (dbg) kubectl --context minikube describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             ubuntu-20-agent-12/10.128.15.239
	Start Time:       Mon, 23 Sep 2024 11:58:30 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vsllp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vsllp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to ubuntu-20-agent-12
	  Normal   Pulling    7m45s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m45s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m45s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m33s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m3s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.01s)

Test pass (104/166)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 1.09
6 TestDownloadOnly/v1.20.0/binaries 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 0.87
15 TestDownloadOnly/v1.31.1/binaries 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 39.34
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 101.41
29 TestAddons/serial/Volcano 38.41
31 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/parallel/InspektorGadget 10.52
36 TestAddons/parallel/MetricsServer 5.42
38 TestAddons/parallel/CSI 39.51
39 TestAddons/parallel/Headlamp 14.96
40 TestAddons/parallel/CloudSpanner 6.28
42 TestAddons/parallel/NvidiaDevicePlugin 6.26
43 TestAddons/parallel/Yakd 10.46
44 TestAddons/StoppedEnableDisable 10.74
46 TestCertExpiration 227.31
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 26.3
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 28.66
61 TestFunctional/serial/KubeContext 0.05
62 TestFunctional/serial/KubectlGetPods 0.07
64 TestFunctional/serial/MinikubeKubectlCmd 0.12
65 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
66 TestFunctional/serial/ExtraConfig 39.56
67 TestFunctional/serial/ComponentHealth 0.07
68 TestFunctional/serial/LogsCmd 0.9
69 TestFunctional/serial/LogsFileCmd 0.95
70 TestFunctional/serial/InvalidService 4.7
72 TestFunctional/parallel/ConfigCmd 0.32
73 TestFunctional/parallel/DashboardCmd 9.1
74 TestFunctional/parallel/DryRun 0.18
75 TestFunctional/parallel/InternationalLanguage 0.09
76 TestFunctional/parallel/StatusCmd 0.47
79 TestFunctional/parallel/ProfileCmd/profile_not_create 0.24
80 TestFunctional/parallel/ProfileCmd/profile_list 0.23
81 TestFunctional/parallel/ProfileCmd/profile_json_output 0.23
83 TestFunctional/parallel/ServiceCmd/DeployApp 10.16
84 TestFunctional/parallel/ServiceCmd/List 0.41
85 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
86 TestFunctional/parallel/ServiceCmd/HTTPS 0.18
87 TestFunctional/parallel/ServiceCmd/Format 0.18
88 TestFunctional/parallel/ServiceCmd/URL 0.18
89 TestFunctional/parallel/ServiceCmdConnect 7.34
90 TestFunctional/parallel/AddonsCmd 0.14
91 TestFunctional/parallel/PersistentVolumeClaim 21.9
104 TestFunctional/parallel/MySQL 21.6
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 15.12
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 12.31
113 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/Version/short 0.06
118 TestFunctional/parallel/Version/components 0.26
119 TestFunctional/parallel/License 0.15
120 TestFunctional/delete_echo-server_images 0.03
121 TestFunctional/delete_my-image_image 0.02
122 TestFunctional/delete_minikube_cached_images 0.02
127 TestImageBuild/serial/Setup 14.65
128 TestImageBuild/serial/NormalBuild 0.92
129 TestImageBuild/serial/BuildWithBuildArg 0.65
130 TestImageBuild/serial/BuildWithDockerIgnore 0.41
131 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.41
135 TestJSONOutput/start/Command 31.07
136 TestJSONOutput/start/Audit 0
138 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/pause/Command 0.57
142 TestJSONOutput/pause/Audit 0
144 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
145 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
147 TestJSONOutput/unpause/Command 0.44
148 TestJSONOutput/unpause/Audit 0
150 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
151 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
153 TestJSONOutput/stop/Command 10.47
154 TestJSONOutput/stop/Audit 0
156 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
158 TestErrorJSONOutput 0.22
163 TestMainNoArgs 0.05
164 TestMinikubeProfile 36.35
172 TestPause/serial/Start 28.44
173 TestPause/serial/SecondStartNoReconfiguration 26.7
174 TestPause/serial/Pause 0.56
175 TestPause/serial/VerifyStatus 0.14
176 TestPause/serial/Unpause 0.43
177 TestPause/serial/PauseAgain 0.56
178 TestPause/serial/DeletePaused 1.94
179 TestPause/serial/VerifyDeletedResources 0.07
193 TestRunningBinaryUpgrade 69.22
195 TestStoppedBinaryUpgrade/Setup 0.55
196 TestStoppedBinaryUpgrade/Upgrade 52.27
197 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
198 TestKubernetesUpgrade 318.1
TestDownloadOnly/v1.20.0/json-events (1.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=none --bootstrapper=kubeadm: (1.090046919s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (1.09s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
--- PASS: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (67.953409ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC |          |
	|         | -p minikube --force            |          |         |         |                     |          |
	|         | --alsologtostderr              |          |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |          |
	|         | --container-runtime=docker     |          |         |         |                     |          |
	|         | --driver=none                  |          |         |         |                     |          |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |          |
	|---------|--------------------------------|----------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:55:27
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:55:27.660809 1777193 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:55:27.660947 1777193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:55:27.660958 1777193 out.go:358] Setting ErrFile to fd 2...
	I0923 11:55:27.660966 1777193 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:55:27.661191 1777193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1770424/.minikube/bin
	W0923 11:55:27.661342 1777193 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19690-1770424/.minikube/config/config.json: open /home/jenkins/minikube-integration/19690-1770424/.minikube/config/config.json: no such file or directory
	I0923 11:55:27.661931 1777193 out.go:352] Setting JSON to true
	I0923 11:55:27.662944 1777193 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":149879,"bootTime":1726942649,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:55:27.663087 1777193 start.go:139] virtualization: kvm guest
	I0923 11:55:27.665349 1777193 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 11:55:27.665505 1777193 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19690-1770424/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 11:55:27.665553 1777193 notify.go:220] Checking for updates...
	I0923 11:55:27.666889 1777193 out.go:169] MINIKUBE_LOCATION=19690
	I0923 11:55:27.668220 1777193 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:55:27.669572 1777193 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19690-1770424/kubeconfig
	I0923 11:55:27.671026 1777193 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1770424/.minikube
	I0923 11:55:27.672311 1777193 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (0.87s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p minikube --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=none --bootstrapper=kubeadm
--- PASS: TestDownloadOnly/v1.31.1/json-events (0.87s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
--- PASS: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p minikube: exit status 85 (70.358397ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| Command |              Args              | Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	| delete  | --all                          | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:55 UTC |
	| delete  | -p minikube                    | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC | 23 Sep 24 11:55 UTC |
	| start   | -o=json --download-only        | minikube | jenkins | v1.34.0 | 23 Sep 24 11:55 UTC |                     |
	|         | -p minikube --force            |          |         |         |                     |                     |
	|         | --alsologtostderr              |          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |          |         |         |                     |                     |
	|         | --container-runtime=docker     |          |         |         |                     |                     |
	|         | --driver=none                  |          |         |         |                     |                     |
	|         | --bootstrapper=kubeadm         |          |         |         |                     |                     |
	|---------|--------------------------------|----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 11:55:29
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 11:55:29.094322 1777345 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:55:29.094451 1777345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:55:29.094460 1777345 out.go:358] Setting ErrFile to fd 2...
	I0923 11:55:29.094467 1777345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:55:29.094699 1777345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1770424/.minikube/bin
	I0923 11:55:29.095288 1777345 out.go:352] Setting JSON to true
	I0923 11:55:29.096265 1777345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":149880,"bootTime":1726942649,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:55:29.096385 1777345 start.go:139] virtualization: kvm guest
	I0923 11:55:29.098608 1777345 out.go:97] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 11:55:29.098750 1777345 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19690-1770424/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 11:55:29.098798 1777345 notify.go:220] Checking for updates...
	I0923 11:55:29.100329 1777345 out.go:169] MINIKUBE_LOCATION=19690
	I0923 11:55:29.101678 1777345 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:55:29.103056 1777345 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19690-1770424/kubeconfig
	I0923 11:55:29.104438 1777345 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1770424/.minikube
	I0923 11:55:29.105750 1777345 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node minikube host does not exist
	  To start a cluster, run: "minikube start"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I0923 11:55:30.527431 1777181 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p minikube --alsologtostderr --binary-mirror http://127.0.0.1:43243 --driver=none --bootstrapper=kubeadm
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestBinaryMirror (0.59s)

TestOffline (39.34s)

=== RUN   TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --memory=2048 --wait=true --driver=none --bootstrapper=kubeadm: (37.564870544s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.769852539s)
--- PASS: TestOffline (39.34s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p minikube: exit status 85 (53.465994ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p minikube: exit status 85 (55.260491ms)

-- stdout --
	* Profile "minikube" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (101.41s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p minikube --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=none --bootstrapper=kubeadm: (1m41.406480928s)
--- PASS: TestAddons/Setup (101.41s)

TestAddons/serial/Volcano (38.41s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 9.41133ms
addons_test.go:835: volcano-scheduler stabilized in 9.452368ms
addons_test.go:843: volcano-admission stabilized in 9.626311ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-7t5hp" [c4b725e6-a52b-4352-a293-7534acd065df] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.005862867s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-ksbx7" [84fd1b1f-bc37-4a3b-8ce9-c3f07c296399] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003782376s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-567wx" [4da3bac1-4f70-48ab-b825-4f967ec1d06d] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.006277619s
addons_test.go:870: (dbg) Run:  kubectl --context minikube delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context minikube create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context minikube get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [396d5e6e-ebc1-469d-a705-4f25f83a96fe] Pending
helpers_test.go:344: "test-job-nginx-0" [396d5e6e-ebc1-469d-a705-4f25f83a96fe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [396d5e6e-ebc1-469d-a705-4f25f83a96fe] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004516012s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable volcano --alsologtostderr -v=1: (10.068509618s)
--- PASS: TestAddons/serial/Volcano (38.41s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context minikube create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context minikube get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/parallel/InspektorGadget (10.52s)

=== RUN   TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-grxjc" [fba7a151-42b4-4a6b-8826-0ab77cc66393] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004151849s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p minikube: (5.512946966s)
--- PASS: TestAddons/parallel/InspektorGadget (10.52s)

TestAddons/parallel/MetricsServer (5.42s)

=== RUN   TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.327101ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-7b8pp" [cf935ea6-5c12-4817-a26f-dda89bea406b] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003594172s
addons_test.go:413: (dbg) Run:  kubectl --context minikube top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.42s)

TestAddons/parallel/CSI (39.51s)

=== RUN   TestAddons/parallel/CSI
I0923 12:08:00.379099 1777181 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 12:08:00.384985 1777181 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 12:08:00.385013 1777181 kapi.go:107] duration metric: took 5.953021ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 5.963647ms
addons_test.go:508: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [38270543-0e8c-404b-abbc-8b9ad6a82c5a] Pending
helpers_test.go:344: "task-pv-pod" [38270543-0e8c-404b-abbc-8b9ad6a82c5a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [38270543-0e8c-404b-abbc-8b9ad6a82c5a] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004157944s
addons_test.go:528: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context minikube get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context minikube delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context minikube get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context minikube create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [aa226c10-208b-4116-8809-93b23f5badfd] Pending
helpers_test.go:344: "task-pv-pod-restore" [aa226c10-208b-4116-8809-93b23f5badfd] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [aa226c10-208b-4116-8809-93b23f5badfd] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004358386s
addons_test.go:570: (dbg) Run:  kubectl --context minikube delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context minikube delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context minikube delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.337540742s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.51s)

TestAddons/parallel/Headlamp (14.96s)

=== RUN   TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p minikube --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-dmdwb" [7a8b9e63-da7e-48d5-aa2a-5a6259b37c25] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-dmdwb" [7a8b9e63-da7e-48d5-aa2a-5a6259b37c25] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004217683s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable headlamp --alsologtostderr -v=1: (5.445951731s)
--- PASS: TestAddons/parallel/Headlamp (14.96s)

TestAddons/parallel/CloudSpanner (6.28s)

=== RUN   TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-rjtsr" [e25e823e-65a5-42e7-8305-a8f498b1f7d9] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003865804s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p minikube
--- PASS: TestAddons/parallel/CloudSpanner (6.28s)

TestAddons/parallel/NvidiaDevicePlugin (6.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7dk7q" [65dd7ae4-8196-4876-ab03-51353f34538f] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004459965s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p minikube
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.26s)

TestAddons/parallel/Yakd (10.46s)

=== RUN   TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-ct5tq" [9e98fd28-0071-4e12-a327-d2bee37cd6dd] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003585488s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p minikube addons disable yakd --alsologtostderr -v=1: (5.455343604s)
--- PASS: TestAddons/parallel/Yakd (10.46s)

TestAddons/StoppedEnableDisable (10.74s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.407981575s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p minikube
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p minikube
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p minikube
--- PASS: TestAddons/StoppedEnableDisable (10.74s)

TestCertExpiration (227.31s)

=== RUN   TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=3m --driver=none --bootstrapper=kubeadm: (14.527797883s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --cert-expiration=8760h --driver=none --bootstrapper=kubeadm: (30.968127525s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.807255157s)
--- PASS: TestCertExpiration (227.31s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19690-1770424/.minikube/files/etc/test/nested/copy/1777181/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (26.3s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=4000 --apiserver-port=8441 --wait=all --driver=none --bootstrapper=kubeadm: (26.295862765s)
--- PASS: TestFunctional/serial/StartWithProxy (26.30s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.66s)

=== RUN   TestFunctional/serial/SoftStart
I0923 12:13:43.483704 1777181 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=8: (28.65952771s)
functional_test.go:663: soft start took 28.660161295s for "minikube" cluster.
I0923 12:14:12.143571 1777181 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (28.66s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context minikube get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p minikube kubectl -- --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context minikube get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (39.56s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p minikube --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.561606116s)
functional_test.go:761: restart took 39.561750744s for "minikube" cluster.
I0923 12:14:52.062821 1777181 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (39.56s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context minikube get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (0.9s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs
--- PASS: TestFunctional/serial/LogsCmd (0.90s)

TestFunctional/serial/LogsFileCmd (0.95s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p minikube logs --file /tmp/TestFunctionalserialLogsFileCmd2092249962/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.95s)

TestFunctional/serial/InvalidService (4.7s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context minikube apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p minikube
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p minikube: exit status 115 (194.448224ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://10.128.15.239:31509 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context minikube delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context minikube delete -f testdata/invalidsvc.yaml: (1.32231902s)
--- PASS: TestFunctional/serial/InvalidService (4.70s)

TestFunctional/parallel/ConfigCmd (0.32s)

=== RUN   TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (51.097741ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p minikube config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube config get cpus: exit status 14 (48.110457ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)

TestFunctional/parallel/DashboardCmd (9.1s)

=== RUN   TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1]
2024/09/23 12:15:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p minikube --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1811585: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.10s)

TestFunctional/parallel/DryRun (0.18s)

=== RUN   TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (93.393996ms)

-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-1770424/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1770424/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the none driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 12:15:08.144884 1811981 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:15:08.145026 1811981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:15:08.145035 1811981 out.go:358] Setting ErrFile to fd 2...
	I0923 12:15:08.145040 1811981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:15:08.145248 1811981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1770424/.minikube/bin
	I0923 12:15:08.145884 1811981 out.go:352] Setting JSON to false
	I0923 12:15:08.147065 1811981 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":151059,"bootTime":1726942649,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:15:08.147210 1811981 start.go:139] virtualization: kvm guest
	I0923 12:15:08.149916 1811981 out.go:177] * minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 12:15:08.151407 1811981 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19690-1770424/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 12:15:08.151458 1811981 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:15:08.151516 1811981 notify.go:220] Checking for updates...
	I0923 12:15:08.154371 1811981 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:15:08.155850 1811981 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-1770424/kubeconfig
	I0923 12:15:08.157422 1811981 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1770424/.minikube
	I0923 12:15:08.158874 1811981 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:15:08.160315 1811981 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:15:08.162416 1811981 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:15:08.162860 1811981 exec_runner.go:51] Run: systemctl --version
	I0923 12:15:08.166017 1811981 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:15:08.179326 1811981 out.go:177] * Using the none driver based on existing profile
	I0923 12:15:08.180554 1811981 start.go:297] selected driver: none
	I0923 12:15:08.180570 1811981 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:15:08.180678 1811981 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:15:08.180708 1811981 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 12:15:08.181127 1811981 out.go:270] ! The 'none' driver does not respect the --memory flag
	! The 'none' driver does not respect the --memory flag
	I0923 12:15:08.183125 1811981 out.go:201] 
	W0923 12:15:08.184525 1811981 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 12:15:08.185905 1811981 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
--- PASS: TestFunctional/parallel/DryRun (0.18s)

TestFunctional/parallel/InternationalLanguage (0.09s)

=== RUN   TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --dry-run --memory 250MB --alsologtostderr --driver=none --bootstrapper=kubeadm: exit status 23 (93.887041ms)

-- stdout --
	* minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-1770424/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1770424/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote none basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 12:15:08.331264 1812011 out.go:345] Setting OutFile to fd 1 ...
	I0923 12:15:08.331386 1812011 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:15:08.331396 1812011 out.go:358] Setting ErrFile to fd 2...
	I0923 12:15:08.331400 1812011 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 12:15:08.331701 1812011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19690-1770424/.minikube/bin
	I0923 12:15:08.332369 1812011 out.go:352] Setting JSON to false
	I0923 12:15:08.333622 1812011 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":151059,"bootTime":1726942649,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 12:15:08.333762 1812011 start.go:139] virtualization: kvm guest
	I0923 12:15:08.336178 1812011 out.go:177] * minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	W0923 12:15:08.338070 1812011 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19690-1770424/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 12:15:08.338134 1812011 out.go:177]   - MINIKUBE_LOCATION=19690
	I0923 12:15:08.338129 1812011 notify.go:220] Checking for updates...
	I0923 12:15:08.339836 1812011 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 12:15:08.342419 1812011 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19690-1770424/kubeconfig
	I0923 12:15:08.343634 1812011 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1770424/.minikube
	I0923 12:15:08.344799 1812011 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 12:15:08.346122 1812011 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 12:15:08.348115 1812011 config.go:182] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 12:15:08.348585 1812011 exec_runner.go:51] Run: systemctl --version
	I0923 12:15:08.351200 1812011 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 12:15:08.361763 1812011 out.go:177] * Utilisation du pilote none basé sur le profil existant
	I0923 12:15:08.363332 1812011 start.go:297] selected driver: none
	I0923 12:15:08.363349 1812011 start.go:901] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.128.15.239 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 12:15:08.363489 1812011 start.go:912] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 12:15:08.363516 1812011 start.go:1730] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
	W0923 12:15:08.363810 1812011 out.go:270] ! Le pilote 'none' ne respecte pas l'indicateur --memory
	! Le pilote 'none' ne respecte pas l'indicateur --memory
	I0923 12:15:08.365973 1812011 out.go:201] 
	W0923 12:15:08.367512 1812011 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 12:15:08.368793 1812011 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.09s)

TestFunctional/parallel/StatusCmd (0.47s)

=== RUN   TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p minikube status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p minikube status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.47s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.24s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.24s)

TestFunctional/parallel/ProfileCmd/profile_list (0.23s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "173.657066ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "55.915224ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.23s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.23s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "177.870689ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "53.170286ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.23s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context minikube create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context minikube expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-cw7f8" [f54db47b-b22e-42d6-bb44-cd2e5a01f7c0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-cw7f8" [f54db47b-b22e-42d6-bb44-cd2e5a01f7c0] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003856258s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.16s)

TestFunctional/parallel/ServiceCmd/List (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.41s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p minikube service list -o json
functional_test.go:1494: Took "351.81577ms" to run "out/minikube-linux-amd64 -p minikube service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p minikube service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://10.128.15.239:30389
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.18s)

TestFunctional/parallel/ServiceCmd/Format (0.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.18s)

TestFunctional/parallel/ServiceCmd/URL (0.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://10.128.15.239:30389
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.18s)

TestFunctional/parallel/ServiceCmdConnect (7.34s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context minikube create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context minikube expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hlccf" [4659ec23-4e90-418c-a308-dafa83f8ac0f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-hlccf" [4659ec23-4e90-418c-a308-dafa83f8ac0f] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004290724s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p minikube service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://10.128.15.239:30849
functional_test.go:1675: http://10.128.15.239:30849: success! body:


Hostname: hello-node-connect-67bdd5bbb4-hlccf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://10.128.15.239:8080/

Request Headers:
	accept-encoding=gzip
	host=10.128.15.239:30849
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.34s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p minikube addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (21.9s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8c66be6f-d0ca-49ad-af90-8b6ca9e51c4f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003744551s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context minikube get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context minikube get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [200bf612-8574-44d7-8dd2-a7cc5c21b6f6] Pending
helpers_test.go:344: "sp-pod" [200bf612-8574-44d7-8dd2-a7cc5c21b6f6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [200bf612-8574-44d7-8dd2-a7cc5c21b6f6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004401693s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context minikube exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context minikube delete -f testdata/storage-provisioner/pod.yaml: (1.122153576s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context minikube apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5a36a29e-151d-491f-aea9-66da32c3a2e0] Pending
helpers_test.go:344: "sp-pod" [5a36a29e-151d-491f-aea9-66da32c3a2e0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5a36a29e-151d-491f-aea9-66da32c3a2e0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003848777s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context minikube exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (21.90s)

                                                
                                    
TestFunctional/parallel/MySQL (21.6s)

=== RUN   TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context minikube replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-krszv" [7ecc6d96-2e5b-43f6-830c-94c5a847d9bf] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-krszv" [7ecc6d96-2e5b-43f6-830c-94c5a847d9bf] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.004264399s
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-krszv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-krszv -- mysql -ppassword -e "show databases;": exit status 1 (127.636973ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 12:16:06.744795 1777181 retry.go:31] will retry after 1.169087082s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-krszv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-krszv -- mysql -ppassword -e "show databases;": exit status 1 (116.935923ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 12:16:08.032112 1777181 retry.go:31] will retry after 1.120046827s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-krszv -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context minikube exec mysql-6cdb49bbb-krszv -- mysql -ppassword -e "show databases;": exit status 1 (121.753045ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0923 12:16:09.275376 1777181 retry.go:31] will retry after 2.659983222s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context minikube exec mysql-6cdb49bbb-krszv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.60s)
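The three non-zero exits above show the harness waiting out mysqld's startup: `retry.go` re-runs the probe with growing delays (1.17s, 1.12s, 2.66s) until it succeeds. A minimal Python sketch of that retry-until-ready pattern, assuming a doubling backoff; `probe` here is a hypothetical stand-in for the real check (running `mysql -e "show databases;"` inside the pod via `kubectl exec`):

```python
import time

# Sketch of a retry-until-ready loop with exponential backoff.
# `probe` stands in for the real readiness check; any callable
# returning True (ready) or False (retry) fits.
def retry_until_ready(probe, attempts=5, initial_delay=1.0):
    delay = initial_delay
    for i in range(1, attempts + 1):
        if probe():
            return i  # number of attempts it took
        print(f"attempt {i} failed; will retry after {delay:.1f}s")
        time.sleep(delay)
        delay *= 2  # back off, matching the growing waits in the log
    raise TimeoutError(f"probe still failing after {attempts} attempts")

# Stand-in probe that succeeds on the third call, like the MySQL pod
# above that only accepted connections once mysqld finished starting.
calls = {"n": 0}
def probe():
    calls["n"] += 1
    return calls["n"] >= 3

print("ready after", retry_until_ready(probe, initial_delay=0.1), "attempts")
```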

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (15.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (15.120389398s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (15.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (12.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-linux-amd64 -p minikube update-context --alsologtostderr -v=2: (12.305429011s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (12.31s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context minikube get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p minikube version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.26s)

=== RUN   TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p minikube version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.26s)

                                                
                                    
TestFunctional/parallel/License (0.15s)

=== RUN   TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.15s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:minikube
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:minikube
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:minikube
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestImageBuild/serial/Setup (14.65s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (14.6521611s)
--- PASS: TestImageBuild/serial/Setup (14.65s)

                                                
                                    
TestImageBuild/serial/NormalBuild (0.92s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p minikube
--- PASS: TestImageBuild/serial/NormalBuild (0.92s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.65s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p minikube
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.65s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p minikube
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p minikube
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

                                                
                                    
TestJSONOutput/start/Command (31.07s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p minikube --output=json --user=testUser --memory=2200 --wait=true --driver=none --bootstrapper=kubeadm: (31.066511251s)
--- PASS: TestJSONOutput/start/Command (31.07s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.57s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.44s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.47s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p minikube --output=json --user=testUser: (10.470583811s)
--- PASS: TestJSONOutput/stop/Command (10.47s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (70.167714ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"33f959c0-6771-4736-a58f-51a661ab74d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e6421a9-bcbd-4f47-8245-f1bd1e995dab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19690"}}
	{"specversion":"1.0","id":"9f071328-0b17-461c-a6af-bf96ac851580","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4dd6d16d-6e75-4b99-9b38-96586a6431ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19690-1770424/kubeconfig"}}
	{"specversion":"1.0","id":"8aa657d1-b168-4242-a525-3ba2055692dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1770424/.minikube"}}
	{"specversion":"1.0","id":"3d1bb3be-35e2-4cba-ac07-48a652bf3abe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9b26b9ea-9fb5-41d7-8f97-bc24c79147d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5184e085-334c-4da1-a358-150376987493","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestErrorJSONOutput (0.22s)
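With `--output=json`, minikube emits one CloudEvents JSON object per stdout line, and the failure above surfaces as an `io.k8s.sigs.minikube.error` event carrying the exit code. A minimal sketch of consuming that stream, using the error line quoted verbatim from the log above:

```python
import json

# One CloudEvents line from the TestErrorJSONOutput stdout above.
sample = ('{"specversion":"1.0","id":"5184e085-334c-4da1-a358-150376987493",'
          '"source":"https://minikube.sigs.k8s.io/",'
          '"type":"io.k8s.sigs.minikube.error",'
          '"datacontenttype":"application/json",'
          '"data":{"advice":"","exitcode":"56","issues":"",'
          '"message":"The driver \'fail\' is not supported on linux/amd64",'
          '"name":"DRV_UNSUPPORTED_OS","url":""}}')

def error_events(lines):
    """Yield the `data` payload of each io.k8s.sigs.minikube.error event."""
    for line in lines:
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            yield event["data"]

err = next(error_events([sample]))
print(err["name"], err["exitcode"])  # DRV_UNSUPPORTED_OS 56
```

Note the exit code travels both as the process exit status (56) and inside the event payload, so a consumer can recover it from the stream alone.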

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (36.35s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (16.335381894s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p minikube --driver=none --bootstrapper=kubeadm: (18.003494351s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile minikube
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.363897276s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- PASS: TestMinikubeProfile (36.35s)

                                                
                                    
TestPause/serial/Start (28.44s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2048 --install-addons=false --wait=all --driver=none --bootstrapper=kubeadm: (28.442868127s)
--- PASS: TestPause/serial/Start (28.44s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (26.7s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p minikube --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (26.704536025s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (26.70s)

                                                
                                    
TestPause/serial/Pause (0.56s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.56s)

                                                
                                    
TestPause/serial/VerifyStatus (0.14s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p minikube --output=json --layout=cluster: exit status 2 (143.103428ms)

                                                
                                                
-- stdout --
	{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.14s)
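The `--layout=cluster` status above is a single JSON document, and the command deliberately exits 2 while the cluster is paused, so a consumer must tolerate a non-zero exit and read the payload anyway. A minimal sketch of picking the cluster and per-component states out of that document, using the status line from the log above:

```python
import json

# The `minikube status --output=json --layout=cluster` payload from the log.
status_json = (
    '{"Name":"minikube","StatusCode":418,"StatusName":"Paused","Step":"Done",'
    '"StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard,'
    ' storage-gluster, istio-operator","BinaryVersion":"v1.34.0",'
    '"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},'
    '"Nodes":[{"Name":"minikube","StatusCode":200,"StatusName":"OK",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

status = json.loads(status_json)
cluster_state = status["StatusName"]                     # overall cluster state
kubelet_state = status["Nodes"][0]["Components"]["kubelet"]["StatusName"]
print(cluster_state, kubelet_state)  # Paused Stopped
```

The HTTP-flavored status codes (418 paused, 405 stopped, 200 OK) make the states machine-checkable without string comparison.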

                                                
                                    
TestPause/serial/Unpause (0.43s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.43s)

                                                
                                    
TestPause/serial/PauseAgain (0.56s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p minikube --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.56s)

                                                
                                    
TestPause/serial/DeletePaused (1.94s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p minikube --alsologtostderr -v=5: (1.936286228s)
--- PASS: TestPause/serial/DeletePaused (1.94s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.07s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.07s)

                                                
                                    
TestRunningBinaryUpgrade (69.22s)

=== RUN   TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3472173203 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3472173203 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (27.930413617s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (37.333367744s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (3.471022724s)
--- PASS: TestRunningBinaryUpgrade (69.22s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (52.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1519595713 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1519595713 start -p minikube --memory=2200 --vm-driver=none --bootstrapper=kubeadm: (15.115361169s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1519595713 -p minikube stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1519595713 -p minikube stop: (23.660488011s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (13.490975409s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (52.27s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p minikube
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                    
TestKubernetesUpgrade (318.1s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (29.276421193s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p minikube
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p minikube: (10.357359286s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p minikube status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p minikube status --format={{.Host}}: exit status 7 (83.995897ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (4m18.31161296s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context minikube version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.20.0 --driver=none --bootstrapper=kubeadm: exit status 106 (93.751314ms)

                                                
                                                
-- stdout --
	* minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19690
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19690-1770424/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19690-1770424/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete
	    minikube start --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p minikube2 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p minikube --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=none --bootstrapper=kubeadm: (18.481521488s)
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p minikube: (1.429455294s)
--- PASS: TestKubernetesUpgrade (318.10s)

                                                
                                    

Test skip (61/166)

Order skipped test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
5 TestDownloadOnly/v1.20.0/cached-images 0
7 TestDownloadOnly/v1.20.0/kubectl 0
13 TestDownloadOnly/v1.31.1/preload-exists 0
14 TestDownloadOnly/v1.31.1/cached-images 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
34 TestAddons/parallel/Ingress 0
37 TestAddons/parallel/Olm 0
41 TestAddons/parallel/LocalPath 0
45 TestCertOptions 0
47 TestDockerFlags 0
48 TestForceSystemdFlag 0
49 TestForceSystemdEnv 0
50 TestDockerEnvContainerd 0
51 TestKVMDriverInstallOrUpdate 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
54 TestErrorSpam 0
63 TestFunctional/serial/CacheCmd 0
77 TestFunctional/parallel/MountCmd 0
94 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
95 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
96 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
97 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
98 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
100 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
101 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
102 TestFunctional/parallel/SSHCmd 0
103 TestFunctional/parallel/CpCmd 0
105 TestFunctional/parallel/FileSync 0
106 TestFunctional/parallel/CertSync 0
111 TestFunctional/parallel/DockerEnv 0
112 TestFunctional/parallel/PodmanEnv 0
114 TestFunctional/parallel/ImageCommands 0
115 TestFunctional/parallel/NonActiveRuntimeDisabled 0
123 TestGvisorAddon 0
124 TestMultiControlPlane 0
132 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
159 TestKicCustomNetwork 0
160 TestKicExistingNetwork 0
161 TestKicCustomSubnet 0
162 TestKicStaticIP 0
165 TestMountStart 0
166 TestMultiNode 0
167 TestNetworkPlugins 0
168 TestNoKubernetes 0
169 TestChangeNoneUser 0
180 TestPreload 0
181 TestScheduledStopWindows 0
182 TestScheduledStopUnix 0
183 TestSkaffold 0
186 TestStartStop/group/old-k8s-version 0.14
187 TestStartStop/group/newest-cni 0.14
188 TestStartStop/group/default-k8s-diff-port 0.14
189 TestStartStop/group/no-preload 0.15
190 TestStartStop/group/disable-driver-mounts 0.14
191 TestStartStop/group/embed-certs 0.14
192 TestInsufficientStorage 0
199 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
aaa_download_only_test.go:109: None driver does not have preload
--- SKIP: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:126: None driver has no cache
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Ingress (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
addons_test.go:194: skipping: ingress not supported
--- SKIP: TestAddons/parallel/Ingress (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/LocalPath (0s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
addons_test.go:916: skip local-path test on none driver
--- SKIP: TestAddons/parallel/LocalPath (0.00s)

                                                
                                    
TestCertOptions (0s)

                                                
                                                
=== RUN   TestCertOptions
cert_options_test.go:34: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestCertOptions (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:38: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestForceSystemdFlag (0s)

                                                
                                                
=== RUN   TestForceSystemdFlag
docker_test.go:81: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdFlag (0.00s)

                                                
                                    
TestForceSystemdEnv (0s)

                                                
                                                
=== RUN   TestForceSystemdEnv
docker_test.go:144: skipping: none driver does not support ssh or bundle docker
--- SKIP: TestForceSystemdEnv (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip none driver.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestErrorSpam (0s)

                                                
                                                
=== RUN   TestErrorSpam
error_spam_test.go:63: none driver always shows a warning
--- SKIP: TestErrorSpam (0.00s)

                                                
                                    
TestFunctional/serial/CacheCmd (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd
functional_test.go:1041: skipping: cache unsupported by none
--- SKIP: TestFunctional/serial/CacheCmd (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
functional_test_mount_test.go:54: skipping: none driver does not support mount
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
functional_test.go:1717: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/SSHCmd (0.00s)

                                                
                                    
TestFunctional/parallel/CpCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
functional_test.go:1760: skipping: cp is unsupported by none driver
--- SKIP: TestFunctional/parallel/CpCmd (0.00s)

                                                
                                    
TestFunctional/parallel/FileSync (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
functional_test.go:1924: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/FileSync (0.00s)

                                                
                                    
TestFunctional/parallel/CertSync (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
functional_test.go:1955: skipping: ssh unsupported by none
--- SKIP: TestFunctional/parallel/CertSync (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
functional_test.go:458: none driver does not support docker-env
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
functional_test.go:545: none driver does not support podman-env
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands
functional_test.go:292: image commands are not available on the none driver
--- SKIP: TestFunctional/parallel/ImageCommands (0.00s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2016: skipping on none driver, minikube does not control the runtime of user on the none driver.
--- SKIP: TestFunctional/parallel/NonActiveRuntimeDisabled (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:31: Can't run containerd backend with none driver
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestMultiControlPlane (0s)

                                                
                                                
=== RUN   TestMultiControlPlane
ha_test.go:41: none driver does not support multinode/ha(multi-control plane) cluster
--- SKIP: TestMultiControlPlane (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestMountStart (0s)

                                                
                                                
=== RUN   TestMountStart
mount_start_test.go:46: skipping: none driver does not support mount
--- SKIP: TestMountStart (0.00s)

                                                
                                    
TestMultiNode (0s)

                                                
                                                
=== RUN   TestMultiNode
multinode_test.go:41: none driver does not support multinode
--- SKIP: TestMultiNode (0.00s)

                                                
                                    
TestNetworkPlugins (0s)

                                                
                                                
=== RUN   TestNetworkPlugins
net_test.go:49: skipping since test for none driver
--- SKIP: TestNetworkPlugins (0.00s)

                                                
                                    
TestNoKubernetes (0s)

                                                
                                                
=== RUN   TestNoKubernetes
no_kubernetes_test.go:36: None driver does not need --no-kubernetes test
--- SKIP: TestNoKubernetes (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestPreload (0s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:32: skipping TestPreload - incompatible with none driver
--- SKIP: TestPreload (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestScheduledStopUnix (0s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:79: --schedule does not work with the none driver
--- SKIP: TestScheduledStopUnix (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:42: none driver doesn't support `minikube docker-env`; skaffold depends on this command
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/old-k8s-version (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version
start_stop_delete_test.go:100: skipping TestStartStop/group/old-k8s-version - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/old-k8s-version (0.14s)

                                                
                                    
TestStartStop/group/newest-cni (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni
start_stop_delete_test.go:100: skipping TestStartStop/group/newest-cni - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/newest-cni (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port
start_stop_delete_test.go:100: skipping TestStartStop/group/default-k8s-diff-port - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/default-k8s-diff-port (0.14s)

                                                
                                    
TestStartStop/group/no-preload (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload
start_stop_delete_test.go:100: skipping TestStartStop/group/no-preload - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/no-preload (0.15s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:100: skipping TestStartStop/group/disable-driver-mounts - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
TestStartStop/group/embed-certs (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs
start_stop_delete_test.go:100: skipping TestStartStop/group/embed-certs - incompatible with none driver
helpers_test.go:175: Cleaning up "minikube" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p minikube
--- SKIP: TestStartStop/group/embed-certs (0.14s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    