=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-699562 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-699562 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-699562 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [22eac9e0-47f1-46a1-9745-87ca515de64e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [22eac9e0-47f1-46a1-9745-87ca515de64e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004672157s
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-699562 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-699562 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.15892694s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
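Note on the failure above: curl uses exit code 28 for "operation timed out", and minikube ssh propagates the remote command's exit status, so the request to the ingress controller never completed inside the VM before the deadline. A minimal sketch for replaying the same check by hand against this profile (the --max-time and -w flags are illustrative additions, not part of the test itself):
    out/minikube-linux-amd64 -p addons-699562 ssh \
      "curl -s -o /dev/null -w '%{http_code}' --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # 200 means ingress-nginx routed the Host header to the nginx Service;
    # exit code 28 from curl again indicates the request timed out inside the VM.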
addons_test.go:288: (dbg) Run: kubectl --context addons-699562 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-699562 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.241
addons_test.go:308: (dbg) Run: out/minikube-linux-amd64 -p addons-699562 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-699562 addons disable ingress-dns --alsologtostderr -v=1: (2.261683793s)
addons_test.go:313: (dbg) Run: out/minikube-linux-amd64 -p addons-699562 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-699562 addons disable ingress --alsologtostderr -v=1: (7.686695231s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-699562 -n addons-699562
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-699562 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-699562 logs -n 25: (1.282446529s)
helpers_test.go:252: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start | -o=json --download-only | download-only-640021 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | |
| | -p download-only-640021 | | | | | |
| | --force --alsologtostderr | | | | | |
| | --kubernetes-version=v1.30.1 | | | | | |
| | --container-runtime=crio | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | --all | minikube | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
| delete | -p download-only-640021 | download-only-640021 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
| delete | -p download-only-979896 | download-only-979896 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
| delete | -p download-only-640021 | download-only-640021 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
| start | --download-only -p | binary-mirror-778765 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | |
| | binary-mirror-778765 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:35769 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | -p binary-mirror-778765 | binary-mirror-778765 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:24 UTC |
| addons | enable dashboard -p | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | |
| | addons-699562 | | | | | |
| addons | disable dashboard -p | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | |
| | addons-699562 | | | | | |
| start | -p addons-699562 --wait=true | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:24 UTC | 03 Jun 24 12:27 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=helm-tiller | | | | | |
| addons | enable headlamp | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
| | -p addons-699562 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-699562 addons disable | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
| | helm-tiller --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| ssh | addons-699562 ssh cat | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | 03 Jun 24 12:27 UTC |
| | /opt/local-path-provisioner/pvc-322948b5-f737-472a-a023-d147f813616b_default_test-pvc/file1 | | | | | |
| addons | addons-699562 addons disable | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:27 UTC | |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-699562 ip | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
| addons | addons-699562 addons disable | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | disable inspektor-gadget -p | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
| | addons-699562 | | | | | |
| addons | disable cloud-spanner -p | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
| | addons-699562 | | | | | |
| ssh | addons-699562 ssh curl -s | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| addons | disable nvidia-device-plugin | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
| | -p addons-699562 | | | | | |
| addons | addons-699562 addons | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-699562 addons | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:28 UTC | 03 Jun 24 12:28 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-699562 ip | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
| addons | addons-699562 addons disable | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
| | ingress-dns --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-699562 addons disable | addons-699562 | jenkins | v1.33.1 | 03 Jun 24 12:30 UTC | 03 Jun 24 12:30 UTC |
| | ingress --alsologtostderr -v=1 | | | | | |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/06/03 12:24:24
Running on machine: ubuntu-20-agent-15
Binary: Built with gc go1.22.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0603 12:24:24.395017 1086826 out.go:291] Setting OutFile to fd 1 ...
I0603 12:24:24.395285 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:24:24.395295 1086826 out.go:304] Setting ErrFile to fd 2...
I0603 12:24:24.395299 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:24:24.395564 1086826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19011-1078924/.minikube/bin
I0603 12:24:24.396217 1086826 out.go:298] Setting JSON to false
I0603 12:24:24.397840 1086826 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11211,"bootTime":1717406253,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I0603 12:24:24.397980 1086826 start.go:139] virtualization: kvm guest
I0603 12:24:24.400088 1086826 out.go:177] * [addons-699562] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
I0603 12:24:24.401631 1086826 out.go:177] - MINIKUBE_LOCATION=19011
I0603 12:24:24.401592 1086826 notify.go:220] Checking for updates...
I0603 12:24:24.403113 1086826 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0603 12:24:24.404638 1086826 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19011-1078924/kubeconfig
I0603 12:24:24.406028 1086826 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19011-1078924/.minikube
I0603 12:24:24.407381 1086826 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I0603 12:24:24.408703 1086826 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0603 12:24:24.410378 1086826 driver.go:392] Setting default libvirt URI to qemu:///system
I0603 12:24:24.441869 1086826 out.go:177] * Using the kvm2 driver based on user configuration
I0603 12:24:24.443443 1086826 start.go:297] selected driver: kvm2
I0603 12:24:24.443462 1086826 start.go:901] validating driver "kvm2" against <nil>
I0603 12:24:24.443474 1086826 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0603 12:24:24.444153 1086826 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0603 12:24:24.444232 1086826 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19011-1078924/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0603 12:24:24.459337 1086826 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
I0603 12:24:24.459391 1086826 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0603 12:24:24.459645 1086826 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0603 12:24:24.459722 1086826 cni.go:84] Creating CNI manager for ""
I0603 12:24:24.459739 1086826 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0603 12:24:24.459752 1086826 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0603 12:24:24.459835 1086826 start.go:340] cluster config:
{Name:addons-699562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0603 12:24:24.459949 1086826 iso.go:125] acquiring lock: {Name:mka26d6a83f88b83737ccc78b57cc462fbe70fe1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0603 12:24:24.461749 1086826 out.go:177] * Starting "addons-699562" primary control-plane node in "addons-699562" cluster
I0603 12:24:24.462982 1086826 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
I0603 12:24:24.463022 1086826 preload.go:147] Found local preload: /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
I0603 12:24:24.463036 1086826 cache.go:56] Caching tarball of preloaded images
I0603 12:24:24.463123 1086826 preload.go:173] Found /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I0603 12:24:24.463134 1086826 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
I0603 12:24:24.463498 1086826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/config.json ...
I0603 12:24:24.463531 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/config.json: {Name:mka3fc11f119399ce4f1970b76b906c714896655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:24:24.463710 1086826 start.go:360] acquireMachinesLock for addons-699562: {Name:mk20baaab39609d00406b78ad309423511e633ec Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0603 12:24:24.463793 1086826 start.go:364] duration metric: took 57.075µs to acquireMachinesLock for "addons-699562"
I0603 12:24:24.463819 1086826 start.go:93] Provisioning new machine with config: &{Name:addons-699562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I0603 12:24:24.463894 1086826 start.go:125] createHost starting for "" (driver="kvm2")
I0603 12:24:24.465633 1086826 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
I0603 12:24:24.465788 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:24:24.465842 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:24:24.480246 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33161
I0603 12:24:24.480702 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:24:24.481265 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:24:24.481286 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:24:24.481607 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:24:24.481786 1086826 main.go:141] libmachine: (addons-699562) Calling .GetMachineName
I0603 12:24:24.481950 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:24:24.482080 1086826 start.go:159] libmachine.API.Create for "addons-699562" (driver="kvm2")
I0603 12:24:24.482118 1086826 client.go:168] LocalClient.Create starting
I0603 12:24:24.482153 1086826 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem
I0603 12:24:24.830722 1086826 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem
I0603 12:24:25.061334 1086826 main.go:141] libmachine: Running pre-create checks...
I0603 12:24:25.061363 1086826 main.go:141] libmachine: (addons-699562) Calling .PreCreateCheck
I0603 12:24:25.061875 1086826 main.go:141] libmachine: (addons-699562) Calling .GetConfigRaw
I0603 12:24:25.062377 1086826 main.go:141] libmachine: Creating machine...
I0603 12:24:25.062395 1086826 main.go:141] libmachine: (addons-699562) Calling .Create
I0603 12:24:25.062542 1086826 main.go:141] libmachine: (addons-699562) Creating KVM machine...
I0603 12:24:25.063695 1086826 main.go:141] libmachine: (addons-699562) DBG | found existing default KVM network
I0603 12:24:25.064418 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.064281 1086848 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015340}
I0603 12:24:25.064476 1086826 main.go:141] libmachine: (addons-699562) DBG | created network xml:
I0603 12:24:25.064498 1086826 main.go:141] libmachine: (addons-699562) DBG | <network>
I0603 12:24:25.064510 1086826 main.go:141] libmachine: (addons-699562) DBG | <name>mk-addons-699562</name>
I0603 12:24:25.064522 1086826 main.go:141] libmachine: (addons-699562) DBG | <dns enable='no'/>
I0603 12:24:25.064532 1086826 main.go:141] libmachine: (addons-699562) DBG |
I0603 12:24:25.064546 1086826 main.go:141] libmachine: (addons-699562) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I0603 12:24:25.064557 1086826 main.go:141] libmachine: (addons-699562) DBG | <dhcp>
I0603 12:24:25.064570 1086826 main.go:141] libmachine: (addons-699562) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I0603 12:24:25.064619 1086826 main.go:141] libmachine: (addons-699562) DBG | </dhcp>
I0603 12:24:25.064645 1086826 main.go:141] libmachine: (addons-699562) DBG | </ip>
I0603 12:24:25.064652 1086826 main.go:141] libmachine: (addons-699562) DBG |
I0603 12:24:25.064657 1086826 main.go:141] libmachine: (addons-699562) DBG | </network>
I0603 12:24:25.064665 1086826 main.go:141] libmachine: (addons-699562) DBG |
I0603 12:24:25.069891 1086826 main.go:141] libmachine: (addons-699562) DBG | trying to create private KVM network mk-addons-699562 192.168.39.0/24...
I0603 12:24:25.134687 1086826 main.go:141] libmachine: (addons-699562) DBG | private KVM network mk-addons-699562 192.168.39.0/24 created
I0603 12:24:25.134727 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.134642 1086848 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19011-1078924/.minikube
I0603 12:24:25.134742 1086826 main.go:141] libmachine: (addons-699562) Setting up store path in /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562 ...
I0603 12:24:25.134761 1086826 main.go:141] libmachine: (addons-699562) Building disk image from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
I0603 12:24:25.134778 1086826 main.go:141] libmachine: (addons-699562) Downloading /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19011-1078924/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
I0603 12:24:25.382531 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.382373 1086848 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa...
I0603 12:24:25.538612 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.538462 1086848 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/addons-699562.rawdisk...
I0603 12:24:25.538652 1086826 main.go:141] libmachine: (addons-699562) DBG | Writing magic tar header
I0603 12:24:25.538667 1086826 main.go:141] libmachine: (addons-699562) DBG | Writing SSH key tar header
I0603 12:24:25.538682 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:25.538619 1086848 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562 ...
I0603 12:24:25.538813 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562 (perms=drwx------)
I0603 12:24:25.538861 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562
I0603 12:24:25.538880 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube/machines (perms=drwxr-xr-x)
I0603 12:24:25.538893 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924/.minikube (perms=drwxr-xr-x)
I0603 12:24:25.538899 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration/19011-1078924 (perms=drwxrwxr-x)
I0603 12:24:25.538906 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I0603 12:24:25.538917 1086826 main.go:141] libmachine: (addons-699562) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I0603 12:24:25.538929 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines
I0603 12:24:25.538943 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924/.minikube
I0603 12:24:25.538956 1086826 main.go:141] libmachine: (addons-699562) Creating domain...
I0603 12:24:25.538971 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19011-1078924
I0603 12:24:25.538983 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I0603 12:24:25.538990 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home/jenkins
I0603 12:24:25.538995 1086826 main.go:141] libmachine: (addons-699562) DBG | Checking permissions on dir: /home
I0603 12:24:25.539005 1086826 main.go:141] libmachine: (addons-699562) DBG | Skipping /home - not owner
I0603 12:24:25.539894 1086826 main.go:141] libmachine: (addons-699562) define libvirt domain using xml:
I0603 12:24:25.539914 1086826 main.go:141] libmachine: (addons-699562) <domain type='kvm'>
I0603 12:24:25.539924 1086826 main.go:141] libmachine: (addons-699562) <name>addons-699562</name>
I0603 12:24:25.539931 1086826 main.go:141] libmachine: (addons-699562) <memory unit='MiB'>4000</memory>
I0603 12:24:25.539939 1086826 main.go:141] libmachine: (addons-699562) <vcpu>2</vcpu>
I0603 12:24:25.539949 1086826 main.go:141] libmachine: (addons-699562) <features>
I0603 12:24:25.539955 1086826 main.go:141] libmachine: (addons-699562) <acpi/>
I0603 12:24:25.539961 1086826 main.go:141] libmachine: (addons-699562) <apic/>
I0603 12:24:25.539966 1086826 main.go:141] libmachine: (addons-699562) <pae/>
I0603 12:24:25.539970 1086826 main.go:141] libmachine: (addons-699562)
I0603 12:24:25.539977 1086826 main.go:141] libmachine: (addons-699562) </features>
I0603 12:24:25.539982 1086826 main.go:141] libmachine: (addons-699562) <cpu mode='host-passthrough'>
I0603 12:24:25.539992 1086826 main.go:141] libmachine: (addons-699562)
I0603 12:24:25.540021 1086826 main.go:141] libmachine: (addons-699562) </cpu>
I0603 12:24:25.540040 1086826 main.go:141] libmachine: (addons-699562) <os>
I0603 12:24:25.540047 1086826 main.go:141] libmachine: (addons-699562) <type>hvm</type>
I0603 12:24:25.540051 1086826 main.go:141] libmachine: (addons-699562) <boot dev='cdrom'/>
I0603 12:24:25.540056 1086826 main.go:141] libmachine: (addons-699562) <boot dev='hd'/>
I0603 12:24:25.540063 1086826 main.go:141] libmachine: (addons-699562) <bootmenu enable='no'/>
I0603 12:24:25.540068 1086826 main.go:141] libmachine: (addons-699562) </os>
I0603 12:24:25.540072 1086826 main.go:141] libmachine: (addons-699562) <devices>
I0603 12:24:25.540080 1086826 main.go:141] libmachine: (addons-699562) <disk type='file' device='cdrom'>
I0603 12:24:25.540089 1086826 main.go:141] libmachine: (addons-699562) <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/boot2docker.iso'/>
I0603 12:24:25.540099 1086826 main.go:141] libmachine: (addons-699562) <target dev='hdc' bus='scsi'/>
I0603 12:24:25.540109 1086826 main.go:141] libmachine: (addons-699562) <readonly/>
I0603 12:24:25.540135 1086826 main.go:141] libmachine: (addons-699562) </disk>
I0603 12:24:25.540160 1086826 main.go:141] libmachine: (addons-699562) <disk type='file' device='disk'>
I0603 12:24:25.540175 1086826 main.go:141] libmachine: (addons-699562) <driver name='qemu' type='raw' cache='default' io='threads' />
I0603 12:24:25.540192 1086826 main.go:141] libmachine: (addons-699562) <source file='/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/addons-699562.rawdisk'/>
I0603 12:24:25.540205 1086826 main.go:141] libmachine: (addons-699562) <target dev='hda' bus='virtio'/>
I0603 12:24:25.540215 1086826 main.go:141] libmachine: (addons-699562) </disk>
I0603 12:24:25.540222 1086826 main.go:141] libmachine: (addons-699562) <interface type='network'>
I0603 12:24:25.540238 1086826 main.go:141] libmachine: (addons-699562) <source network='mk-addons-699562'/>
I0603 12:24:25.540251 1086826 main.go:141] libmachine: (addons-699562) <model type='virtio'/>
I0603 12:24:25.540261 1086826 main.go:141] libmachine: (addons-699562) </interface>
I0603 12:24:25.540273 1086826 main.go:141] libmachine: (addons-699562) <interface type='network'>
I0603 12:24:25.540288 1086826 main.go:141] libmachine: (addons-699562) <source network='default'/>
I0603 12:24:25.540297 1086826 main.go:141] libmachine: (addons-699562) <model type='virtio'/>
I0603 12:24:25.540306 1086826 main.go:141] libmachine: (addons-699562) </interface>
I0603 12:24:25.540313 1086826 main.go:141] libmachine: (addons-699562) <serial type='pty'>
I0603 12:24:25.540323 1086826 main.go:141] libmachine: (addons-699562) <target port='0'/>
I0603 12:24:25.540336 1086826 main.go:141] libmachine: (addons-699562) </serial>
I0603 12:24:25.540346 1086826 main.go:141] libmachine: (addons-699562) <console type='pty'>
I0603 12:24:25.540358 1086826 main.go:141] libmachine: (addons-699562) <target type='serial' port='0'/>
I0603 12:24:25.540372 1086826 main.go:141] libmachine: (addons-699562) </console>
I0603 12:24:25.540383 1086826 main.go:141] libmachine: (addons-699562) <rng model='virtio'>
I0603 12:24:25.540394 1086826 main.go:141] libmachine: (addons-699562) <backend model='random'>/dev/random</backend>
I0603 12:24:25.540400 1086826 main.go:141] libmachine: (addons-699562) </rng>
I0603 12:24:25.540407 1086826 main.go:141] libmachine: (addons-699562)
I0603 12:24:25.540420 1086826 main.go:141] libmachine: (addons-699562)
I0603 12:24:25.540427 1086826 main.go:141] libmachine: (addons-699562) </devices>
I0603 12:24:25.540450 1086826 main.go:141] libmachine: (addons-699562) </domain>
I0603 12:24:25.540473 1086826 main.go:141] libmachine: (addons-699562)
I0603 12:24:25.546035 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:52:26:8d in network default
I0603 12:24:25.546507 1086826 main.go:141] libmachine: (addons-699562) Ensuring networks are active...
I0603 12:24:25.546533 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:25.547150 1086826 main.go:141] libmachine: (addons-699562) Ensuring network default is active
I0603 12:24:25.547454 1086826 main.go:141] libmachine: (addons-699562) Ensuring network mk-addons-699562 is active
I0603 12:24:25.547879 1086826 main.go:141] libmachine: (addons-699562) Getting domain xml...
I0603 12:24:25.548531 1086826 main.go:141] libmachine: (addons-699562) Creating domain...
I0603 12:24:26.908511 1086826 main.go:141] libmachine: (addons-699562) Waiting to get IP...
I0603 12:24:26.909158 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:26.909617 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:26.909662 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:26.909614 1086848 retry.go:31] will retry after 278.583828ms: waiting for machine to come up
I0603 12:24:27.190168 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:27.190625 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:27.190656 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:27.190586 1086848 retry.go:31] will retry after 372.5456ms: waiting for machine to come up
I0603 12:24:27.565372 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:27.565870 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:27.565914 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:27.565824 1086848 retry.go:31] will retry after 296.896127ms: waiting for machine to come up
I0603 12:24:27.864373 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:27.864848 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:27.864874 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:27.864792 1086848 retry.go:31] will retry after 404.252126ms: waiting for machine to come up
I0603 12:24:28.270290 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:28.270670 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:28.270696 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:28.270636 1086848 retry.go:31] will retry after 599.58078ms: waiting for machine to come up
I0603 12:24:28.871331 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:28.871741 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:28.871765 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:28.871690 1086848 retry.go:31] will retry after 952.068344ms: waiting for machine to come up
I0603 12:24:29.825179 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:29.825523 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:29.825588 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:29.825498 1086848 retry.go:31] will retry after 1.104687103s: waiting for machine to come up
I0603 12:24:30.931756 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:30.932080 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:30.932117 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:30.932013 1086848 retry.go:31] will retry after 1.141640091s: waiting for machine to come up
I0603 12:24:32.075239 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:32.075624 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:32.075650 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:32.075551 1086848 retry.go:31] will retry after 1.323363823s: waiting for machine to come up
I0603 12:24:33.401067 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:33.401447 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:33.401478 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:33.401379 1086848 retry.go:31] will retry after 1.79959901s: waiting for machine to come up
I0603 12:24:35.202394 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:35.202849 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:35.202881 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:35.202784 1086848 retry.go:31] will retry after 2.402984849s: waiting for machine to come up
I0603 12:24:37.608253 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:37.608533 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:37.608549 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:37.608522 1086848 retry.go:31] will retry after 3.335405184s: waiting for machine to come up
I0603 12:24:40.945518 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:40.945934 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:40.945954 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:40.945909 1086848 retry.go:31] will retry after 3.713074283s: waiting for machine to come up
I0603 12:24:44.660565 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:44.661082 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find current IP address of domain addons-699562 in network mk-addons-699562
I0603 12:24:44.661109 1086826 main.go:141] libmachine: (addons-699562) DBG | I0603 12:24:44.661034 1086848 retry.go:31] will retry after 5.622787495s: waiting for machine to come up
I0603 12:24:50.285257 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.285752 1086826 main.go:141] libmachine: (addons-699562) Found IP for machine: 192.168.39.241
I0603 12:24:50.285779 1086826 main.go:141] libmachine: (addons-699562) Reserving static IP address...
I0603 12:24:50.285794 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has current primary IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.286188 1086826 main.go:141] libmachine: (addons-699562) DBG | unable to find host DHCP lease matching {name: "addons-699562", mac: "52:54:00:d2:ff:f6", ip: "192.168.39.241"} in network mk-addons-699562
I0603 12:24:50.392337 1086826 main.go:141] libmachine: (addons-699562) DBG | Getting to WaitForSSH function...
I0603 12:24:50.392376 1086826 main.go:141] libmachine: (addons-699562) Reserved static IP address: 192.168.39.241
I0603 12:24:50.392391 1086826 main.go:141] libmachine: (addons-699562) Waiting for SSH to be available...
I0603 12:24:50.394776 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.395257 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:50.395285 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.395509 1086826 main.go:141] libmachine: (addons-699562) DBG | Using SSH client type: external
I0603 12:24:50.395533 1086826 main.go:141] libmachine: (addons-699562) DBG | Using SSH private key: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa (-rw-------)
I0603 12:24:50.395566 1086826 main.go:141] libmachine: (addons-699562) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa -p 22] /usr/bin/ssh <nil>}
I0603 12:24:50.395588 1086826 main.go:141] libmachine: (addons-699562) DBG | About to run SSH command:
I0603 12:24:50.395604 1086826 main.go:141] libmachine: (addons-699562) DBG | exit 0
I0603 12:24:50.517849 1086826 main.go:141] libmachine: (addons-699562) DBG | SSH cmd err, output: <nil>:
I0603 12:24:50.518082 1086826 main.go:141] libmachine: (addons-699562) KVM machine creation complete!
I0603 12:24:50.518448 1086826 main.go:141] libmachine: (addons-699562) Calling .GetConfigRaw
I0603 12:24:50.550682 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:24:50.551029 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:24:50.551273 1086826 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I0603 12:24:50.551288 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:24:50.552696 1086826 main.go:141] libmachine: Detecting operating system of created instance...
I0603 12:24:50.552714 1086826 main.go:141] libmachine: Waiting for SSH to be available...
I0603 12:24:50.552722 1086826 main.go:141] libmachine: Getting to WaitForSSH function...
I0603 12:24:50.552730 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:24:50.554915 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.555224 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:50.555250 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.555415 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:24:50.555599 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:50.555761 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:50.555931 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:24:50.556114 1086826 main.go:141] libmachine: Using SSH client type: native
I0603 12:24:50.556316 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 192.168.39.241 22 <nil> <nil>}
I0603 12:24:50.556330 1086826 main.go:141] libmachine: About to run SSH command:
exit 0
I0603 12:24:50.656762 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0603 12:24:50.656797 1086826 main.go:141] libmachine: Detecting the provisioner...
I0603 12:24:50.656809 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:24:50.659643 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.660034 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:50.660063 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.660274 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:24:50.660454 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:50.660743 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:50.660933 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:24:50.661128 1086826 main.go:141] libmachine: Using SSH client type: native
I0603 12:24:50.661342 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 192.168.39.241 22 <nil> <nil>}
I0603 12:24:50.661355 1086826 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I0603 12:24:50.763353 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I0603 12:24:50.763438 1086826 main.go:141] libmachine: found compatible host: buildroot
I0603 12:24:50.763447 1086826 main.go:141] libmachine: Provisioning with buildroot...
I0603 12:24:50.763457 1086826 main.go:141] libmachine: (addons-699562) Calling .GetMachineName
I0603 12:24:50.763772 1086826 buildroot.go:166] provisioning hostname "addons-699562"
I0603 12:24:50.763801 1086826 main.go:141] libmachine: (addons-699562) Calling .GetMachineName
I0603 12:24:50.764045 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:24:50.766806 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.767124 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:50.767155 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.767267 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:24:50.767455 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:50.767658 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:50.767810 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:24:50.768006 1086826 main.go:141] libmachine: Using SSH client type: native
I0603 12:24:50.768174 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 192.168.39.241 22 <nil> <nil>}
I0603 12:24:50.768185 1086826 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-699562 && echo "addons-699562" | sudo tee /etc/hostname
I0603 12:24:50.884135 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-699562
I0603 12:24:50.884174 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:24:50.886988 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.887370 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:50.887399 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.887522 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:24:50.887747 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:50.887929 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:50.888065 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:24:50.888223 1086826 main.go:141] libmachine: Using SSH client type: native
I0603 12:24:50.888400 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 192.168.39.241 22 <nil> <nil>}
I0603 12:24:50.888415 1086826 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-699562' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-699562/g' /etc/hosts;
else
echo '127.0.1.1 addons-699562' | sudo tee -a /etc/hosts;
fi
fi
I0603 12:24:50.994596 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0603 12:24:50.994627 1086826 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19011-1078924/.minikube CaCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19011-1078924/.minikube}
I0603 12:24:50.994678 1086826 buildroot.go:174] setting up certificates
I0603 12:24:50.994690 1086826 provision.go:84] configureAuth start
I0603 12:24:50.994705 1086826 main.go:141] libmachine: (addons-699562) Calling .GetMachineName
I0603 12:24:50.994999 1086826 main.go:141] libmachine: (addons-699562) Calling .GetIP
I0603 12:24:50.997877 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.998222 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:50.998250 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:50.998373 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:24:51.000223 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.000545 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:51.000580 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.000681 1086826 provision.go:143] copyHostCerts
I0603 12:24:51.000769 1086826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/key.pem (1675 bytes)
I0603 12:24:51.000904 1086826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.pem (1078 bytes)
I0603 12:24:51.000977 1086826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19011-1078924/.minikube/cert.pem (1123 bytes)
I0603 12:24:51.001042 1086826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem org=jenkins.addons-699562 san=[127.0.0.1 192.168.39.241 addons-699562 localhost minikube]
I0603 12:24:51.342081 1086826 provision.go:177] copyRemoteCerts
I0603 12:24:51.342147 1086826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0603 12:24:51.342179 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:24:51.344885 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.345246 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:51.345285 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.345439 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:24:51.345638 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:51.345834 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:24:51.345944 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:24:51.424011 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0603 12:24:51.448763 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I0603 12:24:51.472279 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0603 12:24:51.495752 1086826 provision.go:87] duration metric: took 501.043641ms to configureAuth
I0603 12:24:51.495788 1086826 buildroot.go:189] setting minikube options for container-runtime
I0603 12:24:51.495998 1086826 config.go:182] Loaded profile config "addons-699562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:24:51.496096 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:24:51.498510 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.498896 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:51.498926 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.499093 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:24:51.499296 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:51.499463 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:51.499633 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:24:51.499826 1086826 main.go:141] libmachine: Using SSH client type: native
I0603 12:24:51.500031 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 192.168.39.241 22 <nil> <nil>}
I0603 12:24:51.500047 1086826 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I0603 12:24:51.754663 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
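Note: the %!s(MISSING) token above is Go's fmt placeholder for a missing argument; the intended provisioning command presumably reads:
    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio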
I0603 12:24:51.754694 1086826 main.go:141] libmachine: Checking connection to Docker...
I0603 12:24:51.754702 1086826 main.go:141] libmachine: (addons-699562) Calling .GetURL
I0603 12:24:51.756019 1086826 main.go:141] libmachine: (addons-699562) DBG | Using libvirt version 6000000
I0603 12:24:51.758172 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.758540 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:51.758570 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.758742 1086826 main.go:141] libmachine: Docker is up and running!
I0603 12:24:51.758758 1086826 main.go:141] libmachine: Reticulating splines...
I0603 12:24:51.758766 1086826 client.go:171] duration metric: took 27.276637808s to LocalClient.Create
I0603 12:24:51.758788 1086826 start.go:167] duration metric: took 27.276710156s to libmachine.API.Create "addons-699562"
I0603 12:24:51.758798 1086826 start.go:293] postStartSetup for "addons-699562" (driver="kvm2")
I0603 12:24:51.758807 1086826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0603 12:24:51.758824 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:24:51.759114 1086826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0603 12:24:51.759147 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:24:51.761475 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.761749 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:51.761772 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.761911 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:24:51.762082 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:51.762241 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:24:51.762381 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:24:51.844343 1086826 ssh_runner.go:195] Run: cat /etc/os-release
I0603 12:24:51.848752 1086826 info.go:137] Remote host: Buildroot 2023.02.9
I0603 12:24:51.848851 1086826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/addons for local assets ...
I0603 12:24:51.848922 1086826 filesync.go:126] Scanning /home/jenkins/minikube-integration/19011-1078924/.minikube/files for local assets ...
I0603 12:24:51.848944 1086826 start.go:296] duration metric: took 90.142044ms for postStartSetup
I0603 12:24:51.848980 1086826 main.go:141] libmachine: (addons-699562) Calling .GetConfigRaw
I0603 12:24:51.849638 1086826 main.go:141] libmachine: (addons-699562) Calling .GetIP
I0603 12:24:51.852138 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.852481 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:51.852518 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.852718 1086826 profile.go:143] Saving config to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/config.json ...
I0603 12:24:51.852881 1086826 start.go:128] duration metric: took 27.388970368s to createHost
I0603 12:24:51.852902 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:24:51.854845 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.855095 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:51.855124 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.855228 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:24:51.855393 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:51.855524 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:51.855619 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:24:51.855730 1086826 main.go:141] libmachine: Using SSH client type: native
I0603 12:24:51.855934 1086826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil> [] 0s} 192.168.39.241 22 <nil> <nil>}
I0603 12:24:51.855948 1086826 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0603 12:24:51.954192 1086826 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717417491.933231742
I0603 12:24:51.954226 1086826 fix.go:216] guest clock: 1717417491.933231742
I0603 12:24:51.954242 1086826 fix.go:229] Guest: 2024-06-03 12:24:51.933231742 +0000 UTC Remote: 2024-06-03 12:24:51.852891604 +0000 UTC m=+27.492675075 (delta=80.340138ms)
I0603 12:24:51.954295 1086826 fix.go:200] guest clock delta is within tolerance: 80.340138ms
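Note: the 80.340138ms delta is simply the guest timestamp (from date +%s.%N, logged above with its verbs missing) minus the local timestamp; checked by hand (bc used purely for illustration):
    echo '1717417491.933231742 - 1717417491.852891604' | bc
    # .080340138 s = 80.340138 ms, inside the tolerance fix.go accepts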
I0603 12:24:51.954303 1086826 start.go:83] releasing machines lock for "addons-699562", held for 27.490498582s
I0603 12:24:51.954328 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:24:51.954622 1086826 main.go:141] libmachine: (addons-699562) Calling .GetIP
I0603 12:24:51.957183 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.957520 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:51.957549 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.957708 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:24:51.958214 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:24:51.958399 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:24:51.958512 1086826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0603 12:24:51.958570 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:24:51.958629 1086826 ssh_runner.go:195] Run: cat /version.json
I0603 12:24:51.958653 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:24:51.961231 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.961545 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:51.961572 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.961595 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.961831 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:24:51.961927 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:51.961956 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:51.961990 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:51.962080 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:24:51.962171 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:24:51.962217 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:24:51.962347 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:24:51.962424 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:24:51.962504 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:24:52.034613 1086826 ssh_runner.go:195] Run: systemctl --version
I0603 12:24:52.061172 1086826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I0603 12:24:52.225052 1086826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0603 12:24:52.231581 1086826 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0603 12:24:52.231658 1086826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0603 12:24:52.250882 1086826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
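Note: restoring the %!p(MISSING) verb (presumably find's %p) and the shell quoting lost in logging, the disable step above would look like:
    sudo find /etc/cni/net.d -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;
which is consistent with 87-podman-bridge.conflist being renamed out of the way.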
I0603 12:24:52.250910 1086826 start.go:494] detecting cgroup driver to use...
I0603 12:24:52.250982 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0603 12:24:52.271994 1086826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0603 12:24:52.288519 1086826 docker.go:217] disabling cri-docker service (if available) ...
I0603 12:24:52.288600 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0603 12:24:52.304066 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0603 12:24:52.318357 1086826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0603 12:24:52.444110 1086826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0603 12:24:52.607807 1086826 docker.go:233] disabling docker service ...
I0603 12:24:52.607888 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0603 12:24:52.622983 1086826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0603 12:24:52.635763 1086826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0603 12:24:52.756045 1086826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0603 12:24:52.870974 1086826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0603 12:24:52.885958 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I0603 12:24:52.909939 1086826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
I0603 12:24:52.910019 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
I0603 12:24:52.920976 1086826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I0603 12:24:52.921043 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I0603 12:24:52.932075 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I0603 12:24:52.943370 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I0603 12:24:52.954401 1086826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0603 12:24:52.965823 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I0603 12:24:52.976875 1086826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I0603 12:24:52.994792 1086826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I0603 12:24:53.006108 1086826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0603 12:24:53.016055 1086826 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I0603 12:24:53.016127 1086826 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I0603 12:24:53.030064 1086826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0603 12:24:53.039928 1086826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0603 12:24:53.156311 1086826 ssh_runner.go:195] Run: sudo systemctl restart crio
I0603 12:24:53.297085 1086826 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
I0603 12:24:53.297199 1086826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I0603 12:24:53.302466 1086826 start.go:562] Will wait 60s for crictl version
I0603 12:24:53.302559 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:24:53.306379 1086826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0603 12:24:53.351831 1086826 start.go:578] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I0603 12:24:53.351927 1086826 ssh_runner.go:195] Run: crio --version
I0603 12:24:53.380029 1086826 ssh_runner.go:195] Run: crio --version
I0603 12:24:53.410556 1086826 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
I0603 12:24:53.411804 1086826 main.go:141] libmachine: (addons-699562) Calling .GetIP
I0603 12:24:53.414687 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:53.415038 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:24:53.415065 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:24:53.415276 1086826 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I0603 12:24:53.419753 1086826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0603 12:24:53.432681 1086826 kubeadm.go:877] updating cluster {Name:addons-699562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0603 12:24:53.432799 1086826 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
I0603 12:24:53.432842 1086826 ssh_runner.go:195] Run: sudo crictl images --output json
I0603 12:24:53.466485 1086826 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
I0603 12:24:53.466571 1086826 ssh_runner.go:195] Run: which lz4
I0603 12:24:53.470862 1086826 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
I0603 12:24:53.475112 1086826 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
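Note: with its format verbs restored (presumably stat's %s and %y, i.e. size and mtime), the existence probe above is:
    stat -c "%s %y" /preloaded.tar.lz4
and the status-1 exit just means no preload tarball is on the VM yet, hence the scp that follows.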
I0603 12:24:53.475150 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
I0603 12:24:54.805803 1086826 crio.go:462] duration metric: took 1.334972428s to copy over tarball
I0603 12:24:54.805891 1086826 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I0603 12:24:57.079171 1086826 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.273232417s)
I0603 12:24:57.079222 1086826 crio.go:469] duration metric: took 2.273384926s to extract the tarball
I0603 12:24:57.079239 1086826 ssh_runner.go:146] rm: /preloaded.tar.lz4
I0603 12:24:57.118848 1086826 ssh_runner.go:195] Run: sudo crictl images --output json
I0603 12:24:57.171954 1086826 crio.go:514] all images are preloaded for cri-o runtime.
I0603 12:24:57.171984 1086826 cache_images.go:84] Images are preloaded, skipping loading
I0603 12:24:57.171995 1086826 kubeadm.go:928] updating node { 192.168.39.241 8443 v1.30.1 crio true true} ...
I0603 12:24:57.172114 1086826 kubeadm.go:940] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-699562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
[Install]
config:
{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0603 12:24:57.172180 1086826 ssh_runner.go:195] Run: crio config
I0603 12:24:57.226884 1086826 cni.go:84] Creating CNI manager for ""
I0603 12:24:57.226908 1086826 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0603 12:24:57.226918 1086826 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0603 12:24:57.226941 1086826 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.241 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-699562 NodeName:addons-699562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0603 12:24:57.227076 1086826 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.241
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-699562"
  kubeletExtraArgs:
    node-ip: 192.168.39.241
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.241"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0603 12:24:57.227137 1086826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
I0603 12:24:57.239219 1086826 binaries.go:44] Found k8s binaries, skipping transfer
I0603 12:24:57.239289 1086826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0603 12:24:57.250643 1086826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I0603 12:24:57.269452 1086826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0603 12:24:57.289045 1086826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
I0603 12:24:57.308060 1086826 ssh_runner.go:195] Run: grep 192.168.39.241 control-plane.minikube.internal$ /etc/hosts
I0603 12:24:57.312353 1086826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.241 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0603 12:24:57.326672 1086826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0603 12:24:57.469263 1086826 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0603 12:24:57.487461 1086826 certs.go:68] Setting up /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562 for IP: 192.168.39.241
I0603 12:24:57.487489 1086826 certs.go:194] generating shared ca certs ...
I0603 12:24:57.487508 1086826 certs.go:226] acquiring lock for ca certs: {Name:mkeec5aabce7c9540fcb31b78e4f96c2851d54f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:24:57.487662 1086826 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key
I0603 12:24:57.796136 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt ...
I0603 12:24:57.796173 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt: {Name:mkf6899bfed4ad6512f084e6101d8170b87aa8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:24:57.796347 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key ...
I0603 12:24:57.796359 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key: {Name:mkb9d4ed66614d50db2e65010103ad18fc38392f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:24:57.796434 1086826 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key
I0603 12:24:57.988064 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt ...
I0603 12:24:57.988093 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt: {Name:mkab0d8277f7066917c19f74ecac4b98f17efe97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:24:57.988258 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key ...
I0603 12:24:57.988269 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key: {Name:mkfdedf65267e5b22a2568e9daa9efca1f06a694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:24:57.988340 1086826 certs.go:256] generating profile certs ...
I0603 12:24:57.988401 1086826 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.key
I0603 12:24:57.988418 1086826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt with IP's: []
I0603 12:24:58.169717 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt ...
I0603 12:24:58.169748 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.crt: {Name:mk0332016de9f15436fb308f06459566b4755678 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:24:58.169912 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.key ...
I0603 12:24:58.169924 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/client.key: {Name:mkbd821f9271c2b7a33d746cd213fabc96fbeca6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:24:58.169995 1086826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key.fe8c4ec0
I0603 12:24:58.170014 1086826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt.fe8c4ec0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.241]
I0603 12:24:58.353654 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt.fe8c4ec0 ...
I0603 12:24:58.353688 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt.fe8c4ec0: {Name:mk2efdc33db4a931854f6a87476a9e7c076c4560 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:24:58.353848 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key.fe8c4ec0 ...
I0603 12:24:58.353862 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key.fe8c4ec0: {Name:mkbfd6a19ac77e29694cc3e059a9a211b4a91c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:24:58.353944 1086826 certs.go:381] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt.fe8c4ec0 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt
I0603 12:24:58.354032 1086826 certs.go:385] copying /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key.fe8c4ec0 -> /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key
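Note: the apiserver cert copied above was generated for the SANs requested a few lines earlier (10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.241); a hypothetical hand check, outside the test flow:
    openssl x509 -in /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt -noout -ext subjectAltName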
I0603 12:24:58.354086 1086826 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.key
I0603 12:24:58.354106 1086826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.crt with IP's: []
I0603 12:24:58.527806 1086826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.crt ...
I0603 12:24:58.527842 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.crt: {Name:mkaadea5326f9442ed664027a21a81b1f09a2cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:24:58.528017 1086826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.key ...
I0603 12:24:58.528030 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.key: {Name:mk8d4a5cdfed9257e413dc25422f47f0d4704dc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:24:58.528204 1086826 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca-key.pem (1679 bytes)
I0603 12:24:58.528243 1086826 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/ca.pem (1078 bytes)
I0603 12:24:58.528269 1086826 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/cert.pem (1123 bytes)
I0603 12:24:58.528291 1086826 certs.go:484] found cert: /home/jenkins/minikube-integration/19011-1078924/.minikube/certs/key.pem (1675 bytes)
I0603 12:24:58.528908 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0603 12:24:58.555501 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0603 12:24:58.579639 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0603 12:24:58.603222 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0603 12:24:58.626441 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I0603 12:24:58.650027 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0603 12:24:58.673722 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0603 12:24:58.696976 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/profiles/addons-699562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0603 12:24:58.720137 1086826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19011-1078924/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0603 12:24:58.743306 1086826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0603 12:24:58.760135 1086826 ssh_runner.go:195] Run: openssl version
I0603 12:24:58.766118 1086826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0603 12:24:58.777446 1086826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0603 12:24:58.781938 1086826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jun 3 12:24 /usr/share/ca-certificates/minikubeCA.pem
I0603 12:24:58.781984 1086826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0603 12:24:58.787982 1086826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
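Note: the b5213941.0 link name is the OpenSSL subject-hash lookup form, i.e. the hash printed by the x509 -hash call above plus a ".0" suffix; conceptually (hypothetical one-liner, not the exact command minikube issues):
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0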
I0603 12:24:58.799110 1086826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0603 12:24:58.803751 1086826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0603 12:24:58.803822 1086826 kubeadm.go:391] StartCluster: {Name:addons-699562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-699562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0603 12:24:58.803923 1086826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I0603 12:24:58.803994 1086826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0603 12:24:58.845118 1086826 cri.go:89] found id: ""
I0603 12:24:58.845210 1086826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0603 12:24:58.855452 1086826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0603 12:24:58.866068 1086826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0603 12:24:58.876236 1086826 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0603 12:24:58.876269 1086826 kubeadm.go:156] found existing configuration files:
I0603 12:24:58.876322 1086826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0603 12:24:58.885660 1086826 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0603 12:24:58.885722 1086826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0603 12:24:58.895484 1086826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0603 12:24:58.904797 1086826 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0603 12:24:58.904849 1086826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0603 12:24:58.914443 1086826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0603 12:24:58.923913 1086826 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0603 12:24:58.923979 1086826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0603 12:24:58.936380 1086826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0603 12:24:58.946194 1086826 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0603 12:24:58.946246 1086826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0603 12:24:58.968281 1086826 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I0603 12:24:59.030885 1086826 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
I0603 12:24:59.030942 1086826 kubeadm.go:309] [preflight] Running pre-flight checks
I0603 12:24:59.154648 1086826 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
I0603 12:24:59.154824 1086826 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0603 12:24:59.154989 1086826 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0603 12:24:59.386626 1086826 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0603 12:24:59.388555 1086826 out.go:204] - Generating certificates and keys ...
I0603 12:24:59.388648 1086826 kubeadm.go:309] [certs] Using existing ca certificate authority
I0603 12:24:59.388729 1086826 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
I0603 12:24:59.509601 1086826 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
I0603 12:24:59.635592 1086826 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
I0603 12:24:59.705913 1086826 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
I0603 12:24:59.780001 1086826 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
I0603 12:24:59.863390 1086826 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
I0603 12:24:59.863715 1086826 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-699562 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
I0603 12:24:59.965490 1086826 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
I0603 12:24:59.965718 1086826 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-699562 localhost] and IPs [192.168.39.241 127.0.0.1 ::1]
I0603 12:25:00.170107 1086826 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
I0603 12:25:00.327566 1086826 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
I0603 12:25:00.439543 1086826 kubeadm.go:309] [certs] Generating "sa" key and public key
I0603 12:25:00.439669 1086826 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0603 12:25:00.535598 1086826 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
I0603 12:25:00.754190 1086826 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0603 12:25:00.905712 1086826 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0603 12:25:01.465978 1086826 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0603 12:25:01.632676 1086826 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0603 12:25:01.633547 1086826 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0603 12:25:01.637277 1086826 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0603 12:25:01.639011 1086826 out.go:204] - Booting up control plane ...
I0603 12:25:01.639143 1086826 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0603 12:25:01.639247 1086826 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0603 12:25:01.639361 1086826 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0603 12:25:01.655395 1086826 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0603 12:25:01.656324 1086826 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0603 12:25:01.656395 1086826 kubeadm.go:309] [kubelet-start] Starting the kubelet
I0603 12:25:01.797299 1086826 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0603 12:25:01.797451 1086826 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
I0603 12:25:02.797820 1086826 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.00116594s
I0603 12:25:02.797972 1086826 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0603 12:25:07.796996 1086826 kubeadm.go:309] [api-check] The API server is healthy after 5.001434435s
I0603 12:25:07.809118 1086826 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0603 12:25:07.824366 1086826 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0603 12:25:07.858549 1086826 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
I0603 12:25:07.858769 1086826 kubeadm.go:309] [mark-control-plane] Marking the node addons-699562 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0603 12:25:07.872341 1086826 kubeadm.go:309] [bootstrap-token] Using token: 949ojx.jojr63h99myrhn1a
I0603 12:25:07.873773 1086826 out.go:204] - Configuring RBAC rules ...
I0603 12:25:07.873890 1086826 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0603 12:25:07.890269 1086826 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0603 12:25:07.901714 1086826 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0603 12:25:07.905951 1086826 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0603 12:25:07.910910 1086826 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0603 12:25:07.915270 1086826 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0603 12:25:08.203573 1086826 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0603 12:25:08.642159 1086826 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
I0603 12:25:09.203978 1086826 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
I0603 12:25:09.206070 1086826 kubeadm.go:309]
I0603 12:25:09.206152 1086826 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
I0603 12:25:09.206165 1086826 kubeadm.go:309]
I0603 12:25:09.206239 1086826 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
I0603 12:25:09.206250 1086826 kubeadm.go:309]
I0603 12:25:09.206294 1086826 kubeadm.go:309] mkdir -p $HOME/.kube
I0603 12:25:09.206383 1086826 kubeadm.go:309] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0603 12:25:09.206468 1086826 kubeadm.go:309] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0603 12:25:09.206510 1086826 kubeadm.go:309]
I0603 12:25:09.206601 1086826 kubeadm.go:309] Alternatively, if you are the root user, you can run:
I0603 12:25:09.206613 1086826 kubeadm.go:309]
I0603 12:25:09.206679 1086826 kubeadm.go:309] export KUBECONFIG=/etc/kubernetes/admin.conf
I0603 12:25:09.206690 1086826 kubeadm.go:309]
I0603 12:25:09.206752 1086826 kubeadm.go:309] You should now deploy a pod network to the cluster.
I0603 12:25:09.206864 1086826 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0603 12:25:09.206964 1086826 kubeadm.go:309] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0603 12:25:09.207036 1086826 kubeadm.go:309]
I0603 12:25:09.207149 1086826 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
I0603 12:25:09.207264 1086826 kubeadm.go:309] and service account keys on each node and then running the following as root:
I0603 12:25:09.207280 1086826 kubeadm.go:309]
I0603 12:25:09.207387 1086826 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 949ojx.jojr63h99myrhn1a \
I0603 12:25:09.207541 1086826 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1 \
I0603 12:25:09.207575 1086826 kubeadm.go:309] --control-plane
I0603 12:25:09.207586 1086826 kubeadm.go:309]
I0603 12:25:09.207685 1086826 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
I0603 12:25:09.207694 1086826 kubeadm.go:309]
I0603 12:25:09.207813 1086826 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 949ojx.jojr63h99myrhn1a \
I0603 12:25:09.207922 1086826 kubeadm.go:309] --discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1
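Note: joined onto one line, the worker join command printed above is:
    kubeadm join control-plane.minikube.internal:8443 --token 949ojx.jojr63h99myrhn1a --discovery-token-ca-cert-hash sha256:c33e9516f6d05db03b44f9194bafe44692a1b8ae1d860b8bc74f77578e93fdb1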
I0603 12:25:09.209051 1086826 kubeadm.go:309] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0603 12:25:09.209091 1086826 cni.go:84] Creating CNI manager for ""
I0603 12:25:09.209105 1086826 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I0603 12:25:09.211191 1086826 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I0603 12:25:09.212411 1086826 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0603 12:25:09.223015 1086826 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0603 12:25:09.241025 1086826 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0603 12:25:09.241111 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-699562 minikube.k8s.io/updated_at=2024_06_03T12_25_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354 minikube.k8s.io/name=addons-699562 minikube.k8s.io/primary=true
I0603 12:25:09.241113 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:09.273153 1086826 ops.go:34] apiserver oom_adj: -16
I0603 12:25:09.379202 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:09.879303 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:10.379958 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:10.880244 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:11.380084 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:11.879530 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:12.380197 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:12.879382 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:13.379614 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:13.879715 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:14.379835 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:14.879796 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:15.380103 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:15.879378 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:16.379576 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:16.879485 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:17.379812 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:17.879440 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:18.379509 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:18.879789 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:19.379568 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:19.880105 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:20.380021 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:20.880130 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:21.380225 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:21.879752 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:22.380331 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:22.880058 1086826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0603 12:25:23.038068 1086826 kubeadm.go:1107] duration metric: took 13.797019699s to wait for elevateKubeSystemPrivileges
W0603 12:25:23.038132 1086826 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
I0603 12:25:23.038145 1086826 kubeadm.go:393] duration metric: took 24.234331356s to StartCluster
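The burst of "kubectl get sa default" calls above is a poll loop waiting for the default service account to exist before the cluster is treated as ready. Below is a minimal Go sketch of that kind of wait loop, shelling out to kubectl the way ssh_runner does; the 500ms interval and the function name are assumptions, not minikube's elevateKubeSystemPrivileges implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until it
// succeeds or the timeout elapses.
func waitForDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for default service account", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.30.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("wait result:", err)
}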
I0603 12:25:23.038180 1086826 settings.go:142] acquiring lock: {Name:mka7155af15d143794eb08b8670f7d850f44839e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:25:23.038355 1086826 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19011-1078924/kubeconfig
I0603 12:25:23.038990 1086826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19011-1078924/kubeconfig: {Name:mk082a4c41fd0f4876b4085806e1bc5ef6533b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0603 12:25:23.039268 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0603 12:25:23.039288 1086826 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I0603 12:25:23.041364 1086826 out.go:177] * Verifying Kubernetes components...
I0603 12:25:23.039372 1086826 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
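The toEnable map above drives the per-addon "Setting addon X=true" lines that follow. A rough Go sketch of walking such a map and reporting what would be enabled; the loop body is illustrative only, not minikube's addons package API.

package main

import "fmt"

func main() {
	// A subset of the toEnable map logged above.
	toEnable := map[string]bool{
		"ingress":              true,
		"ingress-dns":          true,
		"metrics-server":       true,
		"default-storageclass": true,
		"volcano":              true, // later rejected: "volcano addon does not support crio"
		"ambassador":           false,
	}
	for name, enabled := range toEnable {
		if !enabled {
			continue
		}
		// minikube would call into its addons package per profile here.
		fmt.Printf("Setting addon %s=true in profile %q\n", name, "addons-699562")
	}
}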
I0603 12:25:23.039478 1086826 config.go:182] Loaded profile config "addons-699562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:25:23.042813 1086826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0603 12:25:23.042835 1086826 addons.go:69] Setting yakd=true in profile "addons-699562"
I0603 12:25:23.042844 1086826 addons.go:69] Setting cloud-spanner=true in profile "addons-699562"
I0603 12:25:23.042871 1086826 addons.go:234] Setting addon cloud-spanner=true in "addons-699562"
I0603 12:25:23.042879 1086826 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-699562"
I0603 12:25:23.042882 1086826 addons.go:69] Setting metrics-server=true in profile "addons-699562"
I0603 12:25:23.042895 1086826 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-699562"
I0603 12:25:23.042929 1086826 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-699562"
I0603 12:25:23.042943 1086826 addons.go:69] Setting volcano=true in profile "addons-699562"
I0603 12:25:23.042955 1086826 addons.go:69] Setting storage-provisioner=true in profile "addons-699562"
I0603 12:25:23.042965 1086826 addons.go:69] Setting registry=true in profile "addons-699562"
I0603 12:25:23.042969 1086826 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-699562"
I0603 12:25:23.042983 1086826 addons.go:234] Setting addon registry=true in "addons-699562"
I0603 12:25:23.042971 1086826 addons.go:234] Setting addon volcano=true in "addons-699562"
I0603 12:25:23.043048 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.043107 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.042871 1086826 addons.go:234] Setting addon yakd=true in "addons-699562"
I0603 12:25:23.043183 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.042936 1086826 addons.go:69] Setting gcp-auth=true in profile "addons-699562"
I0603 12:25:23.043240 1086826 mustload.go:65] Loading cluster: addons-699562
I0603 12:25:23.043428 1086826 config.go:182] Loaded profile config "addons-699562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0603 12:25:23.043518 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.043550 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.043568 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.043579 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.043598 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.043609 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.043628 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.043668 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.042915 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.042907 1086826 addons.go:234] Setting addon metrics-server=true in "addons-699562"
I0603 12:25:23.044377 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.042917 1086826 addons.go:69] Setting default-storageclass=true in profile "addons-699562"
I0603 12:25:23.044579 1086826 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-699562"
I0603 12:25:23.045076 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.045155 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.045255 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.045307 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.042925 1086826 addons.go:69] Setting ingress=true in profile "addons-699562"
I0603 12:25:23.045756 1086826 addons.go:234] Setting addon ingress=true in "addons-699562"
I0603 12:25:23.045822 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.046281 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.046315 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.042989 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.042936 1086826 addons.go:69] Setting inspektor-gadget=true in profile "addons-699562"
I0603 12:25:23.047032 1086826 addons.go:234] Setting addon inspektor-gadget=true in "addons-699562"
I0603 12:25:23.047071 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.042943 1086826 addons.go:69] Setting helm-tiller=true in profile "addons-699562"
I0603 12:25:23.047185 1086826 addons.go:234] Setting addon helm-tiller=true in "addons-699562"
I0603 12:25:23.047212 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.047273 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.047353 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.047374 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.042988 1086826 addons.go:234] Setting addon storage-provisioner=true in "addons-699562"
I0603 12:25:23.047674 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.042972 1086826 addons.go:69] Setting volumesnapshots=true in profile "addons-699562"
I0603 12:25:23.047777 1086826 addons.go:234] Setting addon volumesnapshots=true in "addons-699562"
I0603 12:25:23.047830 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.047846 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.047862 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.047866 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.047882 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.044547 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.042832 1086826 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-699562"
I0603 12:25:23.048354 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.042928 1086826 addons.go:69] Setting ingress-dns=true in profile "addons-699562"
I0603 12:25:23.048397 1086826 addons.go:234] Setting addon ingress-dns=true in "addons-699562"
I0603 12:25:23.048437 1086826 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-699562"
I0603 12:25:23.048446 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.048477 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.047570 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.049263 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.049297 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.053615 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.054040 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.069220 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44425
I0603 12:25:23.073731 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33585
I0603 12:25:23.073786 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42327
I0603 12:25:23.073889 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41657
I0603 12:25:23.074001 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34815
I0603 12:25:23.074277 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.074328 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.074963 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.075006 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.075941 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.076110 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.076230 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.076319 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.076403 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.077634 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.077660 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.077833 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.077853 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.077978 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.077989 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.078025 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.078054 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.078121 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.078545 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.078588 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.079218 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.079264 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.089204 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36147
I0603 12:25:23.089281 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.089382 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.089488 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.089515 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.090485 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.090522 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.091428 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.091439 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.091998 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.092019 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.092018 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.092061 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.092561 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.092601 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.104451 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34635
I0603 12:25:23.104719 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46329
I0603 12:25:23.105270 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.105974 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37029
I0603 12:25:23.106145 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.106224 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42751
I0603 12:25:23.106264 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.106336 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
I0603 12:25:23.106543 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.106556 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.106884 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.106914 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.107363 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.107374 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.107456 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.108069 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.108094 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.108126 1086826 addons.go:234] Setting addon default-storageclass=true in "addons-699562"
I0603 12:25:23.108164 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.108242 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.108255 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.108522 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.108553 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.108768 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.108838 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35381
I0603 12:25:23.109309 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.109376 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.110138 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.110159 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.110602 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.110637 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.111158 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.111762 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.111797 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.113967 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.114019 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.114729 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.114755 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.115240 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.115820 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.115863 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.116533 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.116565 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.116932 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.117491 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.117527 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.121971 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.122294 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.125513 1086826 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-699562"
I0603 12:25:23.125565 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.125951 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.126003 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.127933 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39777
I0603 12:25:23.128409 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.129023 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.129046 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.133336 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38839
I0603 12:25:23.133978 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.134032 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.134207 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.134828 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.134855 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.135234 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.135477 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.137625 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.139847 1086826 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
I0603 12:25:23.138376 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.140132 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46499
I0603 12:25:23.142584 1086826 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
I0603 12:25:23.140984 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
I0603 12:25:23.141908 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.143938 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34971
I0603 12:25:23.144587 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40201
I0603 12:25:23.144635 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.145368 1086826 out.go:177] - Using image docker.io/registry:2.8.3
I0603 12:25:23.145978 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.146582 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.146643 1086826 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
I0603 12:25:23.148445 1086826 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I0603 12:25:23.148467 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I0603 12:25:23.148489 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.146662 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
I0603 12:25:23.146051 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.147210 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.147313 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.147468 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.149199 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.150420 1086826 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
I0603 12:25:23.150511 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.151060 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.152055 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.151064 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.152103 1086826 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I0603 12:25:23.152121 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
I0603 12:25:23.152142 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.152172 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.152107 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.151702 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.151488 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.152265 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.152469 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.152541 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.152630 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.153074 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.153116 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.153228 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.153262 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.153264 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.153299 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.153701 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.153771 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.153798 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.153986 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.154082 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.154329 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.154563 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.154740 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.155872 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:23.156261 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.156302 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.156862 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.157120 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.158786 1086826 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I0603 12:25:23.157660 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.157984 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.160125 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I0603 12:25:23.160139 1086826 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I0603 12:25:23.160159 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.160204 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.160399 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.160579 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.160730 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.161513 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34551
I0603 12:25:23.162147 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.162765 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.162792 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.163193 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.163488 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.163614 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
I0603 12:25:23.164160 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.164193 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.164587 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.164614 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.164750 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.164763 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.164805 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.164974 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.165184 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.165234 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.165357 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.165664 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.166081 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.166343 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:23.166357 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:23.166891 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:23.166905 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:23.166914 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:23.166921 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:23.167166 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:23.167199 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:23.167207 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
W0603 12:25:23.167307 1086826 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I0603 12:25:23.167541 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.169814 1086826 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I0603 12:25:23.169073 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
I0603 12:25:23.171125 1086826 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I0603 12:25:23.172468 1086826 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I0603 12:25:23.171517 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.175038 1086826 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I0603 12:25:23.174482 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.176419 1086826 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I0603 12:25:23.176434 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.177888 1086826 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I0603 12:25:23.180053 1086826 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I0603 12:25:23.178457 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.180020 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36625
I0603 12:25:23.183356 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36483
I0603 12:25:23.183365 1086826 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I0603 12:25:23.181980 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.182398 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.182436 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38145
I0603 12:25:23.182860 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46285
I0603 12:25:23.183773 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.185766 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I0603 12:25:23.185787 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I0603 12:25:23.185818 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.184597 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36141
I0603 12:25:23.185463 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45951
I0603 12:25:23.186847 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.186864 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.186875 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.186883 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.187209 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.187318 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.187375 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.187761 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.187866 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.188143 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.188161 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.188291 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.188305 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.188319 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.188849 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.190945 1086826 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.29.0
I0603 12:25:23.189483 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.189515 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.189541 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.189703 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38189
I0603 12:25:23.190094 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.190186 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38531
I0603 12:25:23.190263 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.190561 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.191601 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.192351 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.193396 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
I0603 12:25:23.193467 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
I0603 12:25:23.193473 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.193494 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.193500 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.193539 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.194368 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.194396 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.194450 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.196235 1086826 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0603 12:25:23.194512 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.194692 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.195573 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.195618 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.195621 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.195676 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
I0603 12:25:23.195711 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.195757 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.196201 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.197205 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.197890 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.197916 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.197972 1086826 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0603 12:25:23.197983 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0603 12:25:23.198001 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.198024 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.198035 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.197833 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.198131 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.198185 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.198474 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.198547 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.200480 1086826 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.4
I0603 12:25:23.199048 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.199079 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.199482 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.199522 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.199896 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.201123 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.201303 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.201308 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.201884 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.202138 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I0603 12:25:23.202157 1086826 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I0603 12:25:23.202175 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.202289 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.202978 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.203030 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.204877 1086826 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
I0603 12:25:23.203059 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.203097 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.203124 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.203254 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.204827 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.205130 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
I0603 12:25:23.205471 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.206028 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.206324 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.206371 1086826 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I0603 12:25:23.206656 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.206651 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.206709 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.207699 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.207618 1086826 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
I0603 12:25:23.207788 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.207805 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I0603 12:25:23.209168 1086826 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0603 12:25:23.209191 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I0603 12:25:23.209215 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.209173 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.208391 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.208423 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.208443 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.210668 1086826 out.go:177] - Using image ghcr.io/helm/tiller:v2.17.0
I0603 12:25:23.208815 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.209149 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.207681 1086826 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
I0603 12:25:23.209933 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.209956 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.209992 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.212401 1086826 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
I0603 12:25:23.212427 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
I0603 12:25:23.212450 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.212513 1086826 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
I0603 12:25:23.214086 1086826 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0603 12:25:23.214106 1086826 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0603 12:25:23.214136 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.212668 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.214207 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.214231 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.215694 1086826 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I0603 12:25:23.213493 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.215710 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I0603 12:25:23.215726 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.213529 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.213766 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.216403 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.213791 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.215381 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.216476 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.216335 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.216495 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.216348 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.216526 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.216541 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.216699 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.216705 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.216755 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.216872 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.216928 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.216996 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.217163 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.217220 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.217347 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.217506 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.218087 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:23.218182 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:23.218381 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.218727 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.218766 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.218941 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.219129 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.221916 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.221953 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.221958 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.221979 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.221998 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.222143 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.222152 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.222332 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.222454 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.225717 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
I0603 12:25:23.254144 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.254702 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.254723 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
W0603 12:25:23.255054 1086826 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43442->192.168.39.241:22: read: connection reset by peer
I0603 12:25:23.255084 1086826 retry.go:31] will retry after 338.902816ms: ssh: handshake failed: read tcp 192.168.39.1:43442->192.168.39.241:22: read: connection reset by peer
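The handshake failure above is transient: the guest's sshd resets the first connection while it is still starting, and the client simply retries after a short backoff, as the retry.go line shows. Below is a minimal sketch of that dial-and-retry pattern, not minikube's actual sshutil/retry code; the plain TCP dial, attempt count, and doubling backoff are assumptions made for illustration.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry keeps retrying a TCP connection to addr, doubling the backoff
    // between attempts, similar in spirit to the "will retry after ..." lines above.
    func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            fmt.Printf("dial failure (will retry after %s): %v\n", backoff, err)
            time.Sleep(backoff)
            backoff *= 2
        }
        return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
    }

    func main() {
        // 192.168.39.241:22 is the node address from this log; any TCP endpoint works.
        if conn, err := dialWithRetry("192.168.39.241:22", 4, 300*time.Millisecond); err == nil {
            conn.Close()
        }
    }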
I0603 12:25:23.255125 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.255337 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.256992 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.257254 1086826 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0603 12:25:23.257274 1086826 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0603 12:25:23.257293 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.260592 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.261091 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.261117 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.261326 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.261563 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.261725 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.261873 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.271376 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33833
I0603 12:25:23.271765 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:23.272295 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:23.272319 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:23.272678 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:23.272874 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:23.274522 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:23.276422 1086826 out.go:177] - Using image docker.io/busybox:stable
I0603 12:25:23.278088 1086826 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I0603 12:25:23.279525 1086826 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0603 12:25:23.279549 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I0603 12:25:23.279573 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:23.282585 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.283142 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:23.283175 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:23.283311 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:23.283518 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:23.283758 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:23.283941 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:23.664725 1086826 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I0603 12:25:23.664753 1086826 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I0603 12:25:23.703703 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I0603 12:25:23.707713 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
I0603 12:25:23.707747 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
I0603 12:25:23.751402 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I0603 12:25:23.751436 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I0603 12:25:23.753543 1086826 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0603 12:25:23.755796 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0603 12:25:23.759663 1086826 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I0603 12:25:23.759693 1086826 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I0603 12:25:23.770791 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0603 12:25:23.831389 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I0603 12:25:23.841828 1086826 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
I0603 12:25:23.841855 1086826 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
I0603 12:25:23.852519 1086826 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0603 12:25:23.852587 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I0603 12:25:23.862595 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I0603 12:25:23.878492 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I0603 12:25:23.880249 1086826 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I0603 12:25:23.880280 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I0603 12:25:23.888423 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I0603 12:25:23.888449 1086826 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I0603 12:25:23.929569 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0603 12:25:23.954649 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
I0603 12:25:23.954686 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
I0603 12:25:24.003881 1086826 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I0603 12:25:24.003919 1086826 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I0603 12:25:24.012187 1086826 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0603 12:25:24.012219 1086826 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0603 12:25:24.069615 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I0603 12:25:24.069647 1086826 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I0603 12:25:24.071510 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I0603 12:25:24.071533 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I0603 12:25:24.090048 1086826 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
I0603 12:25:24.090085 1086826 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
I0603 12:25:24.094186 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I0603 12:25:24.142839 1086826 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0603 12:25:24.142888 1086826 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0603 12:25:24.188372 1086826 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I0603 12:25:24.188411 1086826 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I0603 12:25:24.191010 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
I0603 12:25:24.191034 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
I0603 12:25:24.196505 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I0603 12:25:24.242688 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
I0603 12:25:24.295016 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I0603 12:25:24.295050 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I0603 12:25:24.313366 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I0603 12:25:24.313401 1086826 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I0603 12:25:24.327127 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I0603 12:25:24.327158 1086826 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I0603 12:25:24.368069 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
I0603 12:25:24.368097 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
I0603 12:25:24.430388 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0603 12:25:24.468724 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I0603 12:25:24.468763 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I0603 12:25:24.491688 1086826 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I0603 12:25:24.491709 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I0603 12:25:24.507202 1086826 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0603 12:25:24.507226 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I0603 12:25:24.573395 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
I0603 12:25:24.573434 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
I0603 12:25:24.673069 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0603 12:25:24.710678 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I0603 12:25:24.740006 1086826 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I0603 12:25:24.740041 1086826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I0603 12:25:24.774202 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I0603 12:25:24.774240 1086826 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
I0603 12:25:25.120454 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I0603 12:25:25.120483 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I0603 12:25:25.139484 1086826 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
I0603 12:25:25.139513 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
I0603 12:25:25.367527 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I0603 12:25:25.367559 1086826 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I0603 12:25:25.399703 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
I0603 12:25:25.664579 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I0603 12:25:25.664608 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I0603 12:25:25.965006 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I0603 12:25:25.965039 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I0603 12:25:26.478180 1086826 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0603 12:25:26.478220 1086826 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I0603 12:25:26.694814 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I0603 12:25:30.234960 1086826 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I0603 12:25:30.235007 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:30.238190 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:30.238600 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:30.238635 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:30.238823 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:30.239054 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:30.239222 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:30.239392 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:30.740050 1086826 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I0603 12:25:30.851470 1086826 addons.go:234] Setting addon gcp-auth=true in "addons-699562"
I0603 12:25:30.851557 1086826 host.go:66] Checking if "addons-699562" exists ...
I0603 12:25:30.852058 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:30.852094 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:30.868185 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42573
I0603 12:25:30.868773 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:30.869431 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:30.869461 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:30.869815 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:30.870344 1086826 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0603 12:25:30.870377 1086826 main.go:141] libmachine: Launching plugin server for driver kvm2
I0603 12:25:30.886448 1086826 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
I0603 12:25:30.886966 1086826 main.go:141] libmachine: () Calling .GetVersion
I0603 12:25:30.887481 1086826 main.go:141] libmachine: Using API Version 1
I0603 12:25:30.887505 1086826 main.go:141] libmachine: () Calling .SetConfigRaw
I0603 12:25:30.887824 1086826 main.go:141] libmachine: () Calling .GetMachineName
I0603 12:25:30.888034 1086826 main.go:141] libmachine: (addons-699562) Calling .GetState
I0603 12:25:30.889619 1086826 main.go:141] libmachine: (addons-699562) Calling .DriverName
I0603 12:25:30.889859 1086826 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I0603 12:25:30.889888 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHHostname
I0603 12:25:30.892565 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:30.893052 1086826 main.go:141] libmachine: (addons-699562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:ff:f6", ip: ""} in network mk-addons-699562: {Iface:virbr1 ExpiryTime:2024-06-03 13:24:39 +0000 UTC Type:0 Mac:52:54:00:d2:ff:f6 Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:addons-699562 Clientid:01:52:54:00:d2:ff:f6}
I0603 12:25:30.893120 1086826 main.go:141] libmachine: (addons-699562) DBG | domain addons-699562 has defined IP address 192.168.39.241 and MAC address 52:54:00:d2:ff:f6 in network mk-addons-699562
I0603 12:25:30.893241 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHPort
I0603 12:25:30.893420 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHKeyPath
I0603 12:25:30.893579 1086826 main.go:141] libmachine: (addons-699562) Calling .GetSSHUsername
I0603 12:25:30.893836 1086826 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19011-1078924/.minikube/machines/addons-699562/id_rsa Username:docker}
I0603 12:25:31.588516 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.884760346s)
I0603 12:25:31.588543 1086826 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.834961071s)
I0603 12:25:31.588585 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.588588 1086826 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.832761998s)
I0603 12:25:31.588599 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.588607 1086826 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
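The completed sed pipeline above rewrites the coredns ConfigMap so pods in the guest can resolve host.minikube.internal to the host-side gateway. Going only by the sed expressions in that command, it inserts a log directive ahead of errors and a hosts block ahead of the forward directive, so the relevant part of the edited Corefile would look roughly like this (all surrounding directives elided):

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf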
I0603 12:25:31.588647 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.817821512s)
I0603 12:25:31.588690 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.588707 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.588761 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.72613988s)
I0603 12:25:31.588790 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.588798 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.588816 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.710295183s)
I0603 12:25:31.588707 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.757290038s)
I0603 12:25:31.588849 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.588855 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.588835 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.588888 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.659283987s)
I0603 12:25:31.588892 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.588903 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.588912 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.588951 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.494737371s)
I0603 12:25:31.588970 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.588977 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.588991 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.392463973s)
I0603 12:25:31.589009 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.589017 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.589046 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.346321083s)
I0603 12:25:31.589063 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.589073 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.589124 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.158682793s)
I0603 12:25:31.589141 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.589151 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.589293 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.916193022s)
W0603 12:25:31.589322 1086826 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I0603 12:25:31.589348 1086826 retry.go:31] will retry after 269.565045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
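The failure above is an ordering problem rather than a broken manifest: the single kubectl apply creates the volumesnapshot CRDs and the csi-hostpath-snapclass VolumeSnapshotClass in the same invocation, and the new snapshot.storage.k8s.io/v1 kinds are not yet served by the API server when the custom resource is validated, hence "ensure CRDs are installed first". minikube's remedy, visible in the retry above and in the later apply --force at 12:25:31.860140, is simply to run the apply again once the CRDs are established. As an alternative, a client could wait for the group/version to appear in discovery before applying the custom resources; the sketch below shows one way to do that with client-go (the kubeconfig path is the one used by the commands in this log, the polling interval and timeout are assumptions).

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForGroupVersion polls the API server's discovery endpoint until the given
    // group/version (e.g. snapshot.storage.k8s.io/v1) is served, so that custom
    // resources of that group can be applied without the "no matches for kind" error.
    func waitForGroupVersion(kubeconfig, groupVersion string, timeout time.Duration) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            return err
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := dc.ServerResourcesForGroupVersion(groupVersion); err == nil {
                return nil // the CRD-backed kinds are now discoverable
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not served within %s", groupVersion, timeout)
    }

    func main() {
        if err := waitForGroupVersion("/var/lib/minikube/kubeconfig", "snapshot.storage.k8s.io/v1", time.Minute); err != nil {
            fmt.Println(err)
        }
    }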
I0603 12:25:31.589443 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.878723157s)
I0603 12:25:31.589471 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.589483 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.589514 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.589540 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.589573 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.589583 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.589592 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.589600 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.18985866s)
I0603 12:25:31.589630 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.589633 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.589651 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.589609 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.589720 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.589732 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.589736 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.589739 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.589746 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.589767 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.589777 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.589785 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.589791 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.589792 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.589802 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.589812 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.589815 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.589819 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.589822 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.589830 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.589836 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.589792 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.589877 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.589899 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.589906 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.589915 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.589922 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.589987 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.590005 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.590022 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.590027 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.590181 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.590194 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.590201 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.590207 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.590256 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.590277 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.590287 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.590294 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.590303 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.590339 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.590361 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.590374 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.590381 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.590388 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.590432 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.590455 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.590462 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.590469 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.590475 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.590535 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.590556 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.590563 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.590572 1086826 addons.go:475] Verifying addon metrics-server=true in "addons-699562"
I0603 12:25:31.592647 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.592688 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.592699 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.593151 1086826 node_ready.go:35] waiting up to 6m0s for node "addons-699562" to be "Ready" ...
I0603 12:25:31.593360 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.593391 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.593401 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.593427 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.593437 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.593482 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.593507 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.593513 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.595143 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.595175 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.595183 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.595271 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.595289 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.595296 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.595303 1086826 addons.go:475] Verifying addon ingress=true in "addons-699562"
I0603 12:25:31.598115 1086826 out.go:177] * Verifying ingress addon...
I0603 12:25:31.595658 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.595682 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.595698 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.595717 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.595736 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.595755 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.595770 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.595787 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.596138 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.596162 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.596337 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.596353 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.599578 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.599608 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.599609 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.599614 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.599672 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.599618 1086826 addons.go:475] Verifying addon registry=true in "addons-699562"
I0603 12:25:31.599705 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.601327 1086826 out.go:177] * Verifying registry addon...
I0603 12:25:31.599623 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.600355 1086826 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I0603 12:25:31.603078 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.603348 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.603401 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.603419 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.604909 1086826 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-699562 service yakd-dashboard -n yakd-dashboard
I0603 12:25:31.603845 1086826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I0603 12:25:31.618188 1086826 node_ready.go:49] node "addons-699562" has status "Ready":"True"
I0603 12:25:31.618210 1086826 node_ready.go:38] duration metric: took 25.032701ms for node "addons-699562" to be "Ready" ...
I0603 12:25:31.618219 1086826 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0603 12:25:31.619958 1086826 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I0603 12:25:31.619977 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:31.629749 1086826 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I0603 12:25:31.629771 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
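The kapi.go lines above (and the matching ones further down) implement a simple poll: list the pods matching a label selector and keep waiting while any of them is still Pending. The following is a rough client-go equivalent of that wait loop, not minikube's kapi.go itself; the namespace, selector, and kubeconfig path are taken from this log, while the polling interval is an assumption.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning polls until every pod matching the selector in the given
    // namespace has left the Pending phase, mirroring the kapi.go wait above.
    func waitForPodsRunning(kubeconfig, namespace, selector string, timeout time.Duration) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(namespace).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            pending := 0
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodPending {
                    pending++
                }
            }
            if len(pods.Items) > 0 && pending == 0 {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pods %q in %q still pending after %s", selector, namespace, timeout)
    }

    func main() {
        err := waitForPodsRunning("/var/lib/minikube/kubeconfig", "kube-system",
            "kubernetes.io/minikube-addons=registry", 6*time.Minute)
        fmt.Println(err)
    }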
I0603 12:25:31.643074 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.643101 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.643239 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:31.643263 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:31.643396 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.643421 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:31.643530 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:31.643580 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:31.643589 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
W0603 12:25:31.643687 1086826 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
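The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: something else updated the local-path StorageClass between minikube's read and its write, so the API server rejected the stale update. The usual remedy is to re-read the object and retry the mutation on conflict; the sketch below shows that pattern with client-go's retry helper (the kubeconfig path is reused from this log, and whether minikube itself retries this particular update is not shown in the output).

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    // markDefaultStorageClass sets the is-default-class annotation on a StorageClass,
    // re-fetching and retrying if the object was modified concurrently (the exact
    // "object has been modified" error reported above).
    func markDefaultStorageClass(kubeconfig, name string) error {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return err
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            return err
        }
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
    }

    func main() {
        _ = markDefaultStorageClass("/var/lib/minikube/kubeconfig", "local-path")
    }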
I0603 12:25:31.661700 1086826 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hmhdl" in "kube-system" namespace to be "Ready" ...
I0603 12:25:31.709539 1086826 pod_ready.go:92] pod "coredns-7db6d8ff4d-hmhdl" in "kube-system" namespace has status "Ready":"True"
I0603 12:25:31.709561 1086826 pod_ready.go:81] duration metric: took 47.835085ms for pod "coredns-7db6d8ff4d-hmhdl" in "kube-system" namespace to be "Ready" ...
I0603 12:25:31.709572 1086826 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qjklp" in "kube-system" namespace to be "Ready" ...
I0603 12:25:31.749241 1086826 pod_ready.go:92] pod "coredns-7db6d8ff4d-qjklp" in "kube-system" namespace has status "Ready":"True"
I0603 12:25:31.749267 1086826 pod_ready.go:81] duration metric: took 39.689686ms for pod "coredns-7db6d8ff4d-qjklp" in "kube-system" namespace to be "Ready" ...
I0603 12:25:31.749278 1086826 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-699562" in "kube-system" namespace to be "Ready" ...
I0603 12:25:31.782613 1086826 pod_ready.go:92] pod "etcd-addons-699562" in "kube-system" namespace has status "Ready":"True"
I0603 12:25:31.782641 1086826 pod_ready.go:81] duration metric: took 33.356992ms for pod "etcd-addons-699562" in "kube-system" namespace to be "Ready" ...
I0603 12:25:31.782651 1086826 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-699562" in "kube-system" namespace to be "Ready" ...
I0603 12:25:31.827401 1086826 pod_ready.go:92] pod "kube-apiserver-addons-699562" in "kube-system" namespace has status "Ready":"True"
I0603 12:25:31.827426 1086826 pod_ready.go:81] duration metric: took 44.767786ms for pod "kube-apiserver-addons-699562" in "kube-system" namespace to be "Ready" ...
I0603 12:25:31.827438 1086826 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-699562" in "kube-system" namespace to be "Ready" ...
I0603 12:25:31.860140 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I0603 12:25:31.996499 1086826 pod_ready.go:92] pod "kube-controller-manager-addons-699562" in "kube-system" namespace has status "Ready":"True"
I0603 12:25:31.996535 1086826 pod_ready.go:81] duration metric: took 169.090158ms for pod "kube-controller-manager-addons-699562" in "kube-system" namespace to be "Ready" ...
I0603 12:25:31.996551 1086826 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6ssr8" in "kube-system" namespace to be "Ready" ...
I0603 12:25:32.092808 1086826 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-699562" context rescaled to 1 replicas
I0603 12:25:32.107781 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:32.113006 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:32.397400 1086826 pod_ready.go:92] pod "kube-proxy-6ssr8" in "kube-system" namespace has status "Ready":"True"
I0603 12:25:32.397456 1086826 pod_ready.go:81] duration metric: took 400.897369ms for pod "kube-proxy-6ssr8" in "kube-system" namespace to be "Ready" ...
I0603 12:25:32.397471 1086826 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-699562" in "kube-system" namespace to be "Ready" ...
I0603 12:25:32.609145 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:32.625118 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:32.804548 1086826 pod_ready.go:92] pod "kube-scheduler-addons-699562" in "kube-system" namespace has status "Ready":"True"
I0603 12:25:32.804582 1086826 pod_ready.go:81] duration metric: took 407.101572ms for pod "kube-scheduler-addons-699562" in "kube-system" namespace to be "Ready" ...
I0603 12:25:32.804597 1086826 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace to be "Ready" ...
I0603 12:25:33.108552 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:33.128901 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:33.534656 1086826 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.644766775s)
I0603 12:25:33.536193 1086826 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
I0603 12:25:33.534859 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.839987769s)
I0603 12:25:33.536260 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:33.536276 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:33.539162 1086826 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
I0603 12:25:33.538004 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:33.538036 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:33.540446 1086826 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I0603 12:25:33.540459 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:33.540472 1086826 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I0603 12:25:33.540478 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:33.540491 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:33.540734 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:33.540752 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:33.540765 1086826 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-699562"
I0603 12:25:33.542102 1086826 out.go:177] * Verifying csi-hostpath-driver addon...
I0603 12:25:33.544077 1086826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0603 12:25:33.568601 1086826 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0603 12:25:33.568623 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:33.641891 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:33.645693 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:33.791417 1086826 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I0603 12:25:33.791441 1086826 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I0603 12:25:33.842537 1086826 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0603 12:25:33.842563 1086826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I0603 12:25:33.962934 1086826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I0603 12:25:34.049552 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:34.107653 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:34.110594 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:34.205263 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.345061511s)
I0603 12:25:34.205315 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:34.205328 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:34.205726 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:34.205744 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:34.205755 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:34.205786 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:34.205838 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:34.206122 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:34.206140 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:34.206142 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:34.564029 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:34.621424 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:34.621952 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:34.812209 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:25:35.053298 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:35.111227 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:35.115219 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:35.560207 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:35.607579 1086826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.644597027s)
I0603 12:25:35.607651 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:35.607672 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:35.608000 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:35.608020 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:35.608030 1086826 main.go:141] libmachine: Making call to close driver server
I0603 12:25:35.608038 1086826 main.go:141] libmachine: (addons-699562) Calling .Close
I0603 12:25:35.608049 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:35.608290 1086826 main.go:141] libmachine: Successfully made call to close driver server
I0603 12:25:35.608305 1086826 main.go:141] libmachine: (addons-699562) DBG | Closing plugin on server side
I0603 12:25:35.608315 1086826 main.go:141] libmachine: Making call to close connection to plugin binary
I0603 12:25:35.609668 1086826 addons.go:475] Verifying addon gcp-auth=true in "addons-699562"
I0603 12:25:35.611353 1086826 out.go:177] * Verifying gcp-auth addon...
I0603 12:25:35.613544 1086826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I0603 12:25:35.622615 1086826 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I0603 12:25:35.622636 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:35.622740 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:35.630159 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:36.049713 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:36.107880 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:36.111569 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:36.116514 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:36.550083 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:36.607537 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:36.610472 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:36.616565 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:37.049483 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:37.108073 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:37.112270 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:37.116999 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:37.310101 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:25:37.551354 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:37.607806 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:37.611159 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:37.616391 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:38.050593 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:38.108207 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:38.110598 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:38.116688 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:38.550813 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:38.608089 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:38.610368 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:38.617203 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:39.050555 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:39.109214 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:39.111875 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:39.117025 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:39.311090 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:25:39.550464 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:39.609235 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:39.619822 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:39.623829 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:40.050197 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:40.107940 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:40.111079 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:40.117190 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:40.555709 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:40.606871 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:40.610515 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:40.617821 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:41.051097 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:41.108305 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:41.112441 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:41.120952 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:41.550698 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:41.607766 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:41.611170 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:41.616572 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:41.812095 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:25:42.056884 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:42.107517 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:42.110628 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:42.117448 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:42.550228 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:42.607367 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:42.610776 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:42.617304 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:43.050537 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:43.107475 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:43.110791 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:43.117489 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:43.549629 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:43.607992 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:43.610648 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:43.617442 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:44.050927 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:44.108088 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:44.111037 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:44.117569 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:44.311412 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:25:44.549239 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:44.607460 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:44.610239 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:44.616396 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:45.050428 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:45.108292 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:45.111223 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:45.116803 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:45.549667 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:45.608159 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:45.611071 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:45.617369 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:46.049740 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:46.108356 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:46.111863 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:46.117184 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:46.550035 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:46.610665 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:46.616929 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:46.617766 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:46.811474 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:25:47.050811 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:47.108517 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:47.111277 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:47.118121 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:47.717272 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:47.718579 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:47.720052 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:47.720582 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:48.050550 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:48.107795 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:48.124102 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:48.125107 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:48.550864 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:48.608389 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:48.612915 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:48.617073 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:48.812147 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:25:49.050012 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:49.108020 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:49.111903 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:49.117394 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:49.553646 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:49.616003 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:49.616158 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:49.617707 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:50.050251 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:50.107666 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:50.110733 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:50.117753 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:50.551307 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:50.608304 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:50.611064 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:50.619295 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:50.812287 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:25:51.049733 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:51.108750 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:51.111336 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:51.117075 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:51.549537 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:51.607284 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:51.616857 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:51.621130 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:52.050226 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:52.107358 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:52.109850 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:52.117931 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:52.550300 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:52.607639 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:52.610451 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:52.616746 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:53.050021 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:53.107896 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:53.110487 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:53.118939 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:53.310161 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:25:53.549054 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:53.608773 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:53.611208 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:53.616121 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:54.051892 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:54.107514 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:54.110875 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:54.117146 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:54.550002 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:54.613315 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:54.614243 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:54.617980 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:55.050714 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:55.108031 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:55.114500 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:55.116547 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:55.311779 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:25:55.549866 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:55.607101 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:55.609799 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:55.616946 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:56.050001 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:56.107340 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:56.110621 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:56.116795 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:56.549343 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:56.606883 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:56.611566 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:56.616628 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:57.049643 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:57.107787 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:57.110720 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:57.116984 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:57.552490 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:57.608047 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:57.610865 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:57.617100 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:57.811362 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:25:58.050515 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:58.107939 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:58.111269 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:58.116768 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:58.550559 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:58.607094 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:58.610875 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:58.624472 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:59.050561 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:59.107350 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:59.112122 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:59.116553 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:59.550259 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:25:59.607876 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:25:59.611375 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:25:59.616611 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:25:59.813733 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:00.050436 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:00.107324 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:00.110757 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:00.116870 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:00.551058 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:00.607390 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:00.611966 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:01.121735 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:01.121924 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:01.126130 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:01.126983 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:01.129553 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:01.550366 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:01.607513 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:01.613604 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:01.617595 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:02.050994 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:02.107780 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:02.112490 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:02.116676 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:02.311086 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:02.550413 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:02.607651 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:02.611082 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:02.616850 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:03.057400 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:03.112385 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:03.120791 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:03.123378 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:03.564652 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:03.608348 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:03.613237 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:03.617465 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:04.049337 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:04.108077 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:04.111136 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:04.116450 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:04.312631 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:04.550826 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:04.607574 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:04.610189 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:04.616596 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:05.049987 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:05.107654 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:05.112022 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:05.116491 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:05.558654 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:05.610678 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:05.612773 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:05.616256 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:06.050837 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:06.108894 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:06.116558 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:06.117868 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:06.318490 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:06.552708 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:06.607804 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:06.611009 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:06.616728 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:07.050266 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:07.108163 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:07.110480 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:07.116995 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:07.549887 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:07.607248 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:07.609801 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:07.616886 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:08.050013 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:08.107356 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:08.110936 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:08.117445 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:08.549795 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:08.608572 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:08.612546 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:08.618812 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:08.809645 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:09.050140 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:09.108213 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:09.110707 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:09.126656 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:09.627655 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:09.628075 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:09.629523 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:09.632992 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:10.052780 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:10.107687 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:10.115155 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:10.117087 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:10.559545 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:10.618077 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:10.618827 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:10.624128 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:10.809946 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:11.051024 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:11.107444 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:11.119674 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:11.129911 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:11.551452 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:11.608709 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:11.612387 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:11.618669 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:12.051203 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:12.108483 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:12.113638 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:12.117352 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:12.550419 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:12.606898 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:12.615006 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:12.622892 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:12.810639 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:13.050044 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:13.107552 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:13.110151 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:13.116169 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:13.549565 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:13.607597 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:13.610397 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:13.616616 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:14.075866 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:14.107785 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:14.110642 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:14.116976 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:14.550402 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:14.607903 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:14.624896 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:14.625760 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:14.811698 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:15.050201 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:15.107696 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:15.110185 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I0603 12:26:15.116396 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:15.549935 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:15.608025 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:15.613716 1086826 kapi.go:107] duration metric: took 44.009864907s to wait for kubernetes.io/minikube-addons=registry ...
I0603 12:26:15.616045 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:16.050623 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:16.107997 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:16.118286 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:16.550566 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:16.607168 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:16.617337 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:17.049985 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:17.111754 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:17.117847 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:17.310623 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:17.550155 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:17.607335 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:17.617935 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:18.050331 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:18.107929 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:18.117159 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:18.549743 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:18.608683 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:18.616912 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:19.051032 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:19.107395 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:19.117733 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:19.312689 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:19.551709 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:19.607931 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:19.618005 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:20.056111 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:20.120487 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:20.123371 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:20.549077 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:20.607149 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:20.617546 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:21.049157 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:21.107033 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:21.117469 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:21.553377 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:21.607131 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:21.617199 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:21.811213 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:22.049945 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:22.107589 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:22.117235 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:22.550601 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:22.748260 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:22.748410 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:23.051046 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:23.107427 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:23.116770 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:23.552644 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:23.607528 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:23.616536 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:23.815297 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:24.049690 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:24.107818 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:24.116850 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:24.549590 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:24.607702 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:24.616744 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:25.051117 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:25.110714 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:25.118175 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:25.549997 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:25.607989 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:25.616932 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:26.049772 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:26.107839 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:26.117258 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:26.313163 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:26.550215 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:26.609188 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:26.617513 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:27.050025 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:27.110354 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:27.121036 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:27.552177 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:27.609081 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:27.617129 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:28.063267 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:28.111656 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:28.140408 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:28.554049 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:28.607648 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:28.617599 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:28.815776 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:29.050702 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:29.108722 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:29.117641 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:29.549232 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:29.612125 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:29.619247 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:30.050328 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:30.107579 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:30.116842 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:30.549987 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:30.607456 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:30.617024 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:31.049548 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:31.107580 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:31.117334 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:31.310166 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:31.549604 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:31.610474 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:31.619704 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:32.049274 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:32.107173 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:32.117440 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:32.549145 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:32.607102 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:32.617211 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:33.050797 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:33.107940 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:33.117288 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:33.314001 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:33.553058 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:33.607307 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:33.625683 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:34.051432 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:34.108409 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:34.119806 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:34.549419 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:34.608318 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:34.618266 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:35.049502 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I0603 12:26:35.107005 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:35.116950 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:35.549499 1086826 kapi.go:107] duration metric: took 1m2.005422319s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I0603 12:26:35.607675 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:35.616842 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:35.810331 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:36.108106 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:36.117479 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:36.609257 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:36.617584 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:37.108854 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:37.116821 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:37.792868 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:37.793314 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:37.811615 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:38.107926 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:38.117239 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:38.607378 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:38.616370 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:39.107642 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:39.117063 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:39.609297 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:39.616591 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:40.107745 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:40.117791 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:40.319439 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:40.986440 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:41.000146 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:41.108887 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:41.119076 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:41.607823 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:41.617096 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:42.108419 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:42.117525 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:42.608967 1086826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I0603 12:26:42.617643 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:42.811899 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:43.108162 1086826 kapi.go:107] duration metric: took 1m11.5078006s to wait for app.kubernetes.io/name=ingress-nginx ...
I0603 12:26:43.117460 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:43.617102 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:44.306990 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:44.618442 1086826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I0603 12:26:45.117494 1086826 kapi.go:107] duration metric: took 1m9.503943568s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I0603 12:26:45.119711 1086826 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-699562 cluster.
I0603 12:26:45.121144 1086826 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I0603 12:26:45.122532 1086826 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I0603 12:26:45.123925 1086826 out.go:177] * Enabled addons: metrics-server, storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, inspektor-gadget, helm-tiller, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I0603 12:26:45.125165 1086826 addons.go:510] duration metric: took 1m22.085788168s for enable addons: enabled=[metrics-server storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner inspektor-gadget helm-tiller yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
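[editor's note] The gcp-auth messages above describe the addon's opt-out mechanism: a pod carrying the `gcp-auth-skip-secret` label is skipped by the credential-mounting webhook. Below is a minimal sketch of such a pod applied with kubectl against the same profile; the pod name, image, and the label value "true" are illustrative assumptions, since the log only names the label key.

# Hedged sketch: a pod that opts out of gcp-auth credential mounting.
# The label value "true" is assumed; the addon message above only names the key.
kubectl --context addons-699562 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-auth-demo            # hypothetical name, for illustration only
  labels:
    gcp-auth-skip-secret: "true"    # key taken from the gcp-auth message above
spec:
  containers:
  - name: app
    image: busybox                  # placeholder image
    command: ["sleep", "3600"]
EOF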
I0603 12:26:45.311068 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:47.318344 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:49.810186 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:51.811646 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:54.311268 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:56.311669 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:26:58.311958 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:27:00.811116 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:27:02.812103 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:27:05.311540 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:27:07.811048 1086826 pod_ready.go:102] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"False"
I0603 12:27:09.811356 1086826 pod_ready.go:92] pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace has status "Ready":"True"
I0603 12:27:09.811384 1086826 pod_ready.go:81] duration metric: took 1m37.006778734s for pod "metrics-server-c59844bb4-pl8qk" in "kube-system" namespace to be "Ready" ...
I0603 12:27:09.811395 1086826 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-2sw5z" in "kube-system" namespace to be "Ready" ...
I0603 12:27:09.817624 1086826 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-2sw5z" in "kube-system" namespace has status "Ready":"True"
I0603 12:27:09.817647 1086826 pod_ready.go:81] duration metric: took 6.245626ms for pod "nvidia-device-plugin-daemonset-2sw5z" in "kube-system" namespace to be "Ready" ...
I0603 12:27:09.817666 1086826 pod_ready.go:38] duration metric: took 1m38.19943696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
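[editor's note] The pod_ready polling above repeatedly reads pod status until the Ready condition is true. Roughly the same check can be expressed from the command line with kubectl wait; the `k8s-app=metrics-server` label selector is an assumption (the log identifies the pod only by its generated name), and the 6m timeout mirrors the wait budget used for the nvidia-device-plugin pod above.

# Hedged equivalent of the pod_ready wait: block until metrics-server reports Ready.
# The label selector is assumed; the pod could also be addressed by its exact name.
kubectl --context addons-699562 -n kube-system wait \
  --for=condition=Ready pod -l k8s-app=metrics-server --timeout=6m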
I0603 12:27:09.817688 1086826 api_server.go:52] waiting for apiserver process to appear ...
I0603 12:27:09.817740 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0603 12:27:09.817800 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0603 12:27:09.866770 1086826 cri.go:89] found id: "ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
I0603 12:27:09.866803 1086826 cri.go:89] found id: ""
I0603 12:27:09.866814 1086826 logs.go:276] 1 containers: [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4]
I0603 12:27:09.866881 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:09.871934 1086826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0603 12:27:09.872023 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0603 12:27:09.912368 1086826 cri.go:89] found id: "0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
I0603 12:27:09.912393 1086826 cri.go:89] found id: ""
I0603 12:27:09.912402 1086826 logs.go:276] 1 containers: [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514]
I0603 12:27:09.912466 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:09.917195 1086826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0603 12:27:09.917259 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0603 12:27:09.966345 1086826 cri.go:89] found id: "35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
I0603 12:27:09.966367 1086826 cri.go:89] found id: ""
I0603 12:27:09.966376 1086826 logs.go:276] 1 containers: [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3]
I0603 12:27:09.966437 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:09.970794 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0603 12:27:09.970861 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0603 12:27:10.010220 1086826 cri.go:89] found id: "92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
I0603 12:27:10.010243 1086826 cri.go:89] found id: ""
I0603 12:27:10.010252 1086826 logs.go:276] 1 containers: [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b]
I0603 12:27:10.010307 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:10.014883 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0603 12:27:10.014938 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0603 12:27:10.056002 1086826 cri.go:89] found id: "6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
I0603 12:27:10.056034 1086826 cri.go:89] found id: ""
I0603 12:27:10.056046 1086826 logs.go:276] 1 containers: [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e]
I0603 12:27:10.056117 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:10.060632 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0603 12:27:10.060698 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0603 12:27:10.098738 1086826 cri.go:89] found id: "5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
I0603 12:27:10.098762 1086826 cri.go:89] found id: ""
I0603 12:27:10.098770 1086826 logs.go:276] 1 containers: [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8]
I0603 12:27:10.098819 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:10.103199 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I0603 12:27:10.103278 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0603 12:27:10.152873 1086826 cri.go:89] found id: ""
I0603 12:27:10.152908 1086826 logs.go:276] 0 containers: []
W0603 12:27:10.152919 1086826 logs.go:278] No container was found matching "kindnet"
I0603 12:27:10.152934 1086826 logs.go:123] Gathering logs for CRI-O ...
I0603 12:27:10.152953 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I0603 12:27:11.259298 1086826 logs.go:123] Gathering logs for container status ...
I0603 12:27:11.259359 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0603 12:27:11.306971 1086826 logs.go:123] Gathering logs for kubelet ...
I0603 12:27:11.307009 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0603 12:27:11.359963 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897 1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
W0603 12:27:11.360135 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036 1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
I0603 12:27:11.392921 1086826 logs.go:123] Gathering logs for etcd [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514] ...
I0603 12:27:11.392964 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
I0603 12:27:11.453617 1086826 logs.go:123] Gathering logs for kube-scheduler [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b] ...
I0603 12:27:11.453666 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
I0603 12:27:11.502128 1086826 logs.go:123] Gathering logs for kube-controller-manager [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8] ...
I0603 12:27:11.502170 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
I0603 12:27:11.563524 1086826 logs.go:123] Gathering logs for kube-proxy [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e] ...
I0603 12:27:11.563569 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
I0603 12:27:11.600560 1086826 logs.go:123] Gathering logs for dmesg ...
I0603 12:27:11.600598 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0603 12:27:11.616410 1086826 logs.go:123] Gathering logs for describe nodes ...
I0603 12:27:11.616449 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0603 12:27:11.743210 1086826 logs.go:123] Gathering logs for kube-apiserver [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4] ...
I0603 12:27:11.743245 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
I0603 12:27:11.790432 1086826 logs.go:123] Gathering logs for coredns [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3] ...
I0603 12:27:11.790480 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
I0603 12:27:11.832586 1086826 out.go:304] Setting ErrFile to fd 2...
I0603 12:27:11.832622 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0603 12:27:11.832717 1086826 out.go:239] X Problems detected in kubelet:
W0603 12:27:11.832734 1086826 out.go:239] Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897 1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
W0603 12:27:11.832747 1086826 out.go:239] Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036 1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
I0603 12:27:11.832762 1086826 out.go:304] Setting ErrFile to fd 2...
I0603 12:27:11.832772 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
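[editor's note] The "Gathering logs" steps above run a fixed set of commands inside the VM: journalctl for the kubelet and CRI-O units, crictl for container status and per-container logs, and kubectl describe nodes. Below is a condensed sketch of collecting the same data by hand over minikube ssh; the container ID placeholder must be substituted from the crictl ps output and is not filled in here.

# Hedged sketch: reproduce the post-mortem collection manually (same commands as the log runs).
minikube -p addons-699562 ssh "sudo journalctl -u kubelet -n 400"     # kubelet unit log
minikube -p addons-699562 ssh "sudo journalctl -u crio -n 400"        # CRI-O unit log
minikube -p addons-699562 ssh "sudo crictl ps -a"                     # container status
# Per-container logs: replace <container-id> with an ID from the listing above.
minikube -p addons-699562 ssh "sudo crictl logs --tail 400 <container-id>"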
I0603 12:27:21.834706 1086826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0603 12:27:21.854545 1086826 api_server.go:72] duration metric: took 1m58.8152147s to wait for apiserver process to appear ...
I0603 12:27:21.854577 1086826 api_server.go:88] waiting for apiserver healthz status ...
I0603 12:27:21.854630 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0603 12:27:21.854692 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0603 12:27:21.895375 1086826 cri.go:89] found id: "ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
I0603 12:27:21.895398 1086826 cri.go:89] found id: ""
I0603 12:27:21.895406 1086826 logs.go:276] 1 containers: [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4]
I0603 12:27:21.895460 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:21.900015 1086826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0603 12:27:21.900067 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0603 12:27:21.943623 1086826 cri.go:89] found id: "0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
I0603 12:27:21.943658 1086826 cri.go:89] found id: ""
I0603 12:27:21.943667 1086826 logs.go:276] 1 containers: [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514]
I0603 12:27:21.943731 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:21.948627 1086826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0603 12:27:21.948694 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0603 12:27:21.992699 1086826 cri.go:89] found id: "35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
I0603 12:27:21.992725 1086826 cri.go:89] found id: ""
I0603 12:27:21.992735 1086826 logs.go:276] 1 containers: [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3]
I0603 12:27:21.992800 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:21.998808 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0603 12:27:21.998885 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0603 12:27:22.044523 1086826 cri.go:89] found id: "92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
I0603 12:27:22.044551 1086826 cri.go:89] found id: ""
I0603 12:27:22.044562 1086826 logs.go:276] 1 containers: [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b]
I0603 12:27:22.044631 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:22.049328 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0603 12:27:22.049401 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0603 12:27:22.091373 1086826 cri.go:89] found id: "6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
I0603 12:27:22.091398 1086826 cri.go:89] found id: ""
I0603 12:27:22.091406 1086826 logs.go:276] 1 containers: [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e]
I0603 12:27:22.091468 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:22.095823 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0603 12:27:22.095878 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0603 12:27:22.134585 1086826 cri.go:89] found id: "5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
I0603 12:27:22.134616 1086826 cri.go:89] found id: ""
I0603 12:27:22.134627 1086826 logs.go:276] 1 containers: [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8]
I0603 12:27:22.134682 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:22.138852 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I0603 12:27:22.138911 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0603 12:27:22.183843 1086826 cri.go:89] found id: ""
I0603 12:27:22.183868 1086826 logs.go:276] 0 containers: []
W0603 12:27:22.183876 1086826 logs.go:278] No container was found matching "kindnet"
I0603 12:27:22.183886 1086826 logs.go:123] Gathering logs for dmesg ...
I0603 12:27:22.183900 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0603 12:27:22.199319 1086826 logs.go:123] Gathering logs for kube-apiserver [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4] ...
I0603 12:27:22.199362 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
I0603 12:27:22.255816 1086826 logs.go:123] Gathering logs for etcd [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514] ...
I0603 12:27:22.255848 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
I0603 12:27:22.309345 1086826 logs.go:123] Gathering logs for kube-proxy [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e] ...
I0603 12:27:22.309387 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
I0603 12:27:22.349280 1086826 logs.go:123] Gathering logs for kube-controller-manager [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8] ...
I0603 12:27:22.349321 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
I0603 12:27:22.418782 1086826 logs.go:123] Gathering logs for kubelet ...
I0603 12:27:22.418821 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0603 12:27:22.478644 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897 1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
W0603 12:27:22.478888 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036 1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
I0603 12:27:22.516116 1086826 logs.go:123] Gathering logs for describe nodes ...
I0603 12:27:22.516161 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0603 12:27:22.649308 1086826 logs.go:123] Gathering logs for coredns [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3] ...
I0603 12:27:22.649341 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
I0603 12:27:22.689734 1086826 logs.go:123] Gathering logs for kube-scheduler [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b] ...
I0603 12:27:22.689785 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
I0603 12:27:22.733915 1086826 logs.go:123] Gathering logs for CRI-O ...
I0603 12:27:22.733952 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I0603 12:27:23.479507 1086826 logs.go:123] Gathering logs for container status ...
I0603 12:27:23.479570 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0603 12:27:23.533200 1086826 out.go:304] Setting ErrFile to fd 2...
I0603 12:27:23.533234 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0603 12:27:23.533305 1086826 out.go:239] X Problems detected in kubelet:
W0603 12:27:23.533317 1086826 out.go:239] Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897 1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
W0603 12:27:23.533324 1086826 out.go:239] Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036 1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
I0603 12:27:23.533336 1086826 out.go:304] Setting ErrFile to fd 2...
I0603 12:27:23.533346 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:27:33.534751 1086826 api_server.go:253] Checking apiserver healthz at https://192.168.39.241:8443/healthz ...
I0603 12:27:33.539341 1086826 api_server.go:279] https://192.168.39.241:8443/healthz returned 200:
ok
I0603 12:27:33.540654 1086826 api_server.go:141] control plane version: v1.30.1
I0603 12:27:33.540682 1086826 api_server.go:131] duration metric: took 11.686097199s to wait for apiserver health ...
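[editor's note] The health wait above polls the apiserver's /healthz endpoint until it returns 200. A hedged manual equivalent from inside the VM is shown below; -k skips TLS verification, and the call relies on /healthz being readable without credentials (the default system:public-info-viewer binding), which is an assumption about this cluster's configuration.

# Hedged sketch: query the same health endpoint; a healthy apiserver answers "ok".
minikube -p addons-699562 ssh "curl -sk https://192.168.39.241:8443/healthz"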
I0603 12:27:33.540692 1086826 system_pods.go:43] waiting for kube-system pods to appear ...
I0603 12:27:33.540725 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I0603 12:27:33.540790 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0603 12:27:33.580781 1086826 cri.go:89] found id: "ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
I0603 12:27:33.580805 1086826 cri.go:89] found id: ""
I0603 12:27:33.580815 1086826 logs.go:276] 1 containers: [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4]
I0603 12:27:33.580884 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:33.585293 1086826 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I0603 12:27:33.585356 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0603 12:27:33.624776 1086826 cri.go:89] found id: "0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
I0603 12:27:33.624799 1086826 cri.go:89] found id: ""
I0603 12:27:33.624807 1086826 logs.go:276] 1 containers: [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514]
I0603 12:27:33.624855 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:33.629338 1086826 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I0603 12:27:33.629437 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0603 12:27:33.676960 1086826 cri.go:89] found id: "35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
I0603 12:27:33.676995 1086826 cri.go:89] found id: ""
I0603 12:27:33.677007 1086826 logs.go:276] 1 containers: [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3]
I0603 12:27:33.677078 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:33.681551 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I0603 12:27:33.681615 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0603 12:27:33.719279 1086826 cri.go:89] found id: "92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
I0603 12:27:33.719302 1086826 cri.go:89] found id: ""
I0603 12:27:33.719311 1086826 logs.go:276] 1 containers: [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b]
I0603 12:27:33.719384 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:33.723688 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I0603 12:27:33.723743 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0603 12:27:33.760181 1086826 cri.go:89] found id: "6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
I0603 12:27:33.760210 1086826 cri.go:89] found id: ""
I0603 12:27:33.760221 1086826 logs.go:276] 1 containers: [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e]
I0603 12:27:33.760283 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:33.764788 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I0603 12:27:33.764868 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0603 12:27:33.805979 1086826 cri.go:89] found id: "5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
I0603 12:27:33.806017 1086826 cri.go:89] found id: ""
I0603 12:27:33.806030 1086826 logs.go:276] 1 containers: [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8]
I0603 12:27:33.806117 1086826 ssh_runner.go:195] Run: which crictl
I0603 12:27:33.810640 1086826 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
I0603 12:27:33.810719 1086826 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0603 12:27:33.860436 1086826 cri.go:89] found id: ""
I0603 12:27:33.860478 1086826 logs.go:276] 0 containers: []
W0603 12:27:33.860490 1086826 logs.go:278] No container was found matching "kindnet"
I0603 12:27:33.860503 1086826 logs.go:123] Gathering logs for kubelet ...
I0603 12:27:33.860523 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0603 12:27:33.912867 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897 1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
W0603 12:27:33.913116 1086826 logs.go:138] Found kubelet problem: Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036 1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
I0603 12:27:33.946641 1086826 logs.go:123] Gathering logs for kube-apiserver [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4] ...
I0603 12:27:33.946688 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4"
I0603 12:27:33.995447 1086826 logs.go:123] Gathering logs for etcd [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514] ...
I0603 12:27:33.995490 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514"
I0603 12:27:34.053247 1086826 logs.go:123] Gathering logs for kube-proxy [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e] ...
I0603 12:27:34.053293 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e"
I0603 12:27:34.092640 1086826 logs.go:123] Gathering logs for kube-controller-manager [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8] ...
I0603 12:27:34.092671 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8"
I0603 12:27:34.161946 1086826 logs.go:123] Gathering logs for CRI-O ...
I0603 12:27:34.161991 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I0603 12:27:35.152385 1086826 logs.go:123] Gathering logs for container status ...
I0603 12:27:35.152441 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0603 12:27:35.199230 1086826 logs.go:123] Gathering logs for dmesg ...
I0603 12:27:35.199272 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0603 12:27:35.215226 1086826 logs.go:123] Gathering logs for describe nodes ...
I0603 12:27:35.215263 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0603 12:27:35.337967 1086826 logs.go:123] Gathering logs for coredns [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3] ...
I0603 12:27:35.338010 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3"
I0603 12:27:35.380440 1086826 logs.go:123] Gathering logs for kube-scheduler [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b] ...
I0603 12:27:35.380475 1086826 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b"
I0603 12:27:35.427393 1086826 out.go:304] Setting ErrFile to fd 2...
I0603 12:27:35.427426 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0603 12:27:35.427490 1086826 out.go:239] X Problems detected in kubelet:
W0603 12:27:35.427499 1086826 out.go:239] Jun 03 12:25:29 addons-699562 kubelet[1268]: W0603 12:25:29.345897 1268 reflector.go:547] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
W0603 12:27:35.427506 1086826 out.go:239] Jun 03 12:25:29 addons-699562 kubelet[1268]: E0603 12:25:29.346036 1268 reflector.go:150] object-"yakd-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-699562" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-699562' and this object
I0603 12:27:35.427513 1086826 out.go:304] Setting ErrFile to fd 2...
I0603 12:27:35.427520 1086826 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0603 12:27:45.442234 1086826 system_pods.go:59] 18 kube-system pods found
I0603 12:27:45.442278 1086826 system_pods.go:61] "coredns-7db6d8ff4d-hmhdl" [c3cfe166-99f3-4ac9-9905-8be76bcb511d] Running
I0603 12:27:45.442285 1086826 system_pods.go:61] "csi-hostpath-attacher-0" [c6efaa50-400a-4e2d-9610-290b08ca0e27] Running
I0603 12:27:45.442291 1086826 system_pods.go:61] "csi-hostpath-resizer-0" [d102455a-acf1-4067-b512-3e7d24676733] Running
I0603 12:27:45.442296 1086826 system_pods.go:61] "csi-hostpathplugin-ldcdv" [db932b0d-726d-4b8d-b47c-dcbc1657a70d] Running
I0603 12:27:45.442300 1086826 system_pods.go:61] "etcd-addons-699562" [90cdaf3f-ae75-439a-84f1-78cba28a6085] Running
I0603 12:27:45.442305 1086826 system_pods.go:61] "kube-apiserver-addons-699562" [08e077e7-849e-40e9-bbb8-d3d5857a87bb] Running
I0603 12:27:45.442309 1086826 system_pods.go:61] "kube-controller-manager-addons-699562" [1fb0b7db-5179-43de-bdea-0a9c8666d1dd] Running
I0603 12:27:45.442313 1086826 system_pods.go:61] "kube-ingress-dns-minikube" [21a1c096-2479-4d10-864a-8b202b08a284] Running
I0603 12:27:45.442318 1086826 system_pods.go:61] "kube-proxy-6ssr8" [609d1553-86b5-46ea-b503-bdfd9f291571] Running
I0603 12:27:45.442323 1086826 system_pods.go:61] "kube-scheduler-addons-699562" [d5748ac9-a1c8-496a-aa0f-8a75c6a8b12c] Running
I0603 12:27:45.442327 1086826 system_pods.go:61] "metrics-server-c59844bb4-pl8qk" [26f4580a-9514-47c0-aa22-11c454eaca32] Running
I0603 12:27:45.442332 1086826 system_pods.go:61] "nvidia-device-plugin-daemonset-2sw5z" [3ad1866a-b3d5-4783-b2dd-557082180d8f] Running
I0603 12:27:45.442337 1086826 system_pods.go:61] "registry-jrrh7" [af432feb-b699-477a-8cd5-ff109071d13d] Running
I0603 12:27:45.442342 1086826 system_pods.go:61] "registry-proxy-n8265" [343bbd2c-1a4b-4796-8401-ebd3686c0a61] Running
I0603 12:27:45.442348 1086826 system_pods.go:61] "snapshot-controller-745499f584-dk5sk" [e74e33d1-7eaf-46d7-bcb2-2a088a1687bd] Running
I0603 12:27:45.442356 1086826 system_pods.go:61] "snapshot-controller-745499f584-nkg59" [dd8cffdf-f15c-405a-95d3-fa13eb7a4908] Running
I0603 12:27:45.442361 1086826 system_pods.go:61] "storage-provisioner" [c3d92bc5-3f10-47e3-84a9-f532f14deae4] Running
I0603 12:27:45.442370 1086826 system_pods.go:61] "tiller-deploy-6677d64bcd-k4tt8" [0ecadef4-5251-4d11-a39c-77a196200334] Running
I0603 12:27:45.442378 1086826 system_pods.go:74] duration metric: took 11.901678581s to wait for pod list to return data ...
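[editor's note] The 18-pod inventory above is the same data kubectl reports for the namespace; a hedged one-liner for cross-checking that every kube-system pod is Running:

# Hedged sketch: list kube-system pods and their phases, matching the inventory above.
kubectl --context addons-699562 -n kube-system get pods -o wide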
I0603 12:27:45.442391 1086826 default_sa.go:34] waiting for default service account to be created ...
I0603 12:27:45.444510 1086826 default_sa.go:45] found service account: "default"
I0603 12:27:45.444530 1086826 default_sa.go:55] duration metric: took 2.131961ms for default service account to be created ...
I0603 12:27:45.444537 1086826 system_pods.go:116] waiting for k8s-apps to be running ...
I0603 12:27:45.453736 1086826 system_pods.go:86] 18 kube-system pods found
I0603 12:27:45.453760 1086826 system_pods.go:89] "coredns-7db6d8ff4d-hmhdl" [c3cfe166-99f3-4ac9-9905-8be76bcb511d] Running
I0603 12:27:45.453766 1086826 system_pods.go:89] "csi-hostpath-attacher-0" [c6efaa50-400a-4e2d-9610-290b08ca0e27] Running
I0603 12:27:45.453770 1086826 system_pods.go:89] "csi-hostpath-resizer-0" [d102455a-acf1-4067-b512-3e7d24676733] Running
I0603 12:27:45.453774 1086826 system_pods.go:89] "csi-hostpathplugin-ldcdv" [db932b0d-726d-4b8d-b47c-dcbc1657a70d] Running
I0603 12:27:45.453778 1086826 system_pods.go:89] "etcd-addons-699562" [90cdaf3f-ae75-439a-84f1-78cba28a6085] Running
I0603 12:27:45.453782 1086826 system_pods.go:89] "kube-apiserver-addons-699562" [08e077e7-849e-40e9-bbb8-d3d5857a87bb] Running
I0603 12:27:45.453786 1086826 system_pods.go:89] "kube-controller-manager-addons-699562" [1fb0b7db-5179-43de-bdea-0a9c8666d1dd] Running
I0603 12:27:45.453791 1086826 system_pods.go:89] "kube-ingress-dns-minikube" [21a1c096-2479-4d10-864a-8b202b08a284] Running
I0603 12:27:45.453795 1086826 system_pods.go:89] "kube-proxy-6ssr8" [609d1553-86b5-46ea-b503-bdfd9f291571] Running
I0603 12:27:45.453799 1086826 system_pods.go:89] "kube-scheduler-addons-699562" [d5748ac9-a1c8-496a-aa0f-8a75c6a8b12c] Running
I0603 12:27:45.453805 1086826 system_pods.go:89] "metrics-server-c59844bb4-pl8qk" [26f4580a-9514-47c0-aa22-11c454eaca32] Running
I0603 12:27:45.453809 1086826 system_pods.go:89] "nvidia-device-plugin-daemonset-2sw5z" [3ad1866a-b3d5-4783-b2dd-557082180d8f] Running
I0603 12:27:45.453814 1086826 system_pods.go:89] "registry-jrrh7" [af432feb-b699-477a-8cd5-ff109071d13d] Running
I0603 12:27:45.453818 1086826 system_pods.go:89] "registry-proxy-n8265" [343bbd2c-1a4b-4796-8401-ebd3686c0a61] Running
I0603 12:27:45.453821 1086826 system_pods.go:89] "snapshot-controller-745499f584-dk5sk" [e74e33d1-7eaf-46d7-bcb2-2a088a1687bd] Running
I0603 12:27:45.453828 1086826 system_pods.go:89] "snapshot-controller-745499f584-nkg59" [dd8cffdf-f15c-405a-95d3-fa13eb7a4908] Running
I0603 12:27:45.453832 1086826 system_pods.go:89] "storage-provisioner" [c3d92bc5-3f10-47e3-84a9-f532f14deae4] Running
I0603 12:27:45.453835 1086826 system_pods.go:89] "tiller-deploy-6677d64bcd-k4tt8" [0ecadef4-5251-4d11-a39c-77a196200334] Running
I0603 12:27:45.453842 1086826 system_pods.go:126] duration metric: took 9.30001ms to wait for k8s-apps to be running ...
I0603 12:27:45.453849 1086826 system_svc.go:44] waiting for kubelet service to be running ....
I0603 12:27:45.453893 1086826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0603 12:27:45.472770 1086826 system_svc.go:56] duration metric: took 18.912332ms WaitForService to wait for kubelet
I0603 12:27:45.472793 1086826 kubeadm.go:576] duration metric: took 2m22.433473354s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0603 12:27:45.472813 1086826 node_conditions.go:102] verifying NodePressure condition ...
I0603 12:27:45.476327 1086826 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I0603 12:27:45.476351 1086826 node_conditions.go:123] node cpu capacity is 2
I0603 12:27:45.476365 1086826 node_conditions.go:105] duration metric: took 3.54603ms to run NodePressure ...
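[editor's note] The NodePressure step reads the node's capacity fields (17734596Ki ephemeral storage and 2 CPUs here) and its pressure conditions. A hedged grep over kubectl describe pulls out the same values and conditions for manual inspection.

# Hedged sketch: show the capacity values and pressure conditions verified above.
kubectl --context addons-699562 describe node addons-699562 | grep -E 'cpu:|ephemeral-storage:'
kubectl --context addons-699562 describe node addons-699562 | grep -E 'MemoryPressure|DiskPressure|PIDPressure'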
I0603 12:27:45.476377 1086826 start.go:240] waiting for startup goroutines ...
I0603 12:27:45.476384 1086826 start.go:245] waiting for cluster config update ...
I0603 12:27:45.476401 1086826 start.go:254] writing updated cluster config ...
I0603 12:27:45.476702 1086826 ssh_runner.go:195] Run: rm -f paused
I0603 12:27:45.526801 1086826 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
I0603 12:27:45.529608 1086826 out.go:177] * Done! kubectl is now configured to use "addons-699562" cluster and "default" namespace by default
==> CRI-O <==
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.528938196Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417833528913422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=121ccec4-380f-46bd-9fef-9b30dbc18861 name=/runtime.v1.ImageService/ImageFsInfo
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.529569461Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7c6a863-792e-4035-a4c3-6d145ae50a33 name=/runtime.v1.RuntimeService/ListContainers
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.529695724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7c6a863-792e-4035-a4c3-6d145ae50a33 name=/runtime.v1.RuntimeService/ListContainers
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.530102065Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d642bc3271163ac07b67d5d065df03b7f8ec966349c684f21b1bd59704f0e69,PodSandboxId:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717417824993167174,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,},Annotations:map[string]string{io.kubernetes.container.hash: 29cd3655,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a173411215156f8b18d5fbf4880f8e8fdde156ec2a9e410913aa0c571553461a,PodSandboxId:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717417684144009266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,},Annotations:map[string]string{io.kubern
etes.container.hash: d250beef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf5932194780bd3cb97f9edc1a375d5a81fda4f3a462fe7b477ade5bb3d2ef1,PodSandboxId:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717417672655915191,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,},Annotations:map[string]string{io.kubernetes.container.hash: 22246b3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4,PodSandboxId:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717417604379198722,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,},Annotations:map[string]string{io.kubernetes.container.hash: 7c46e196,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d13bd5e73c30ae4165f06b2185319aeacf35e7ffaa0b56363eb04137f5f6968,PodSandboxId:b07a28e9eef859d799892174459ac6d06f7f796f90c1d805586d8d4438fd0f2d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1717417585093806521,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rl49z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 40f21b83-9dbc-4bc9-b23d-5c8c1aa04d70,},Annotations:map[string]string{io.kubernetes.container.hash: 7e9d6aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78062314a87041d03dc6e5e7132662b0ff33b7d83c2f19e08843de0216f60c0f,PodSandboxId:52827ed278e4e8c5717896922ec331ff8f986d672d90a707092e16fb2a1356a5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1717417584963806225,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7kn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 676ead8d-a891-4dac-8cc5-992c426fcdc9,},Annotations:map[string]string{io.kubernetes.container.hash: d1d99a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08062fd585905d420205076b1d3f748399b18214282181c986eb4e0dcdcb686f,PodSandboxId:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717417579443576098,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,},Annotations:map[string]string{io.kubernetes.container.hash: 32e0c41d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff24eb8563b0cdf092e0477748bf3b7abddc2958ee31f2fce5b90d2987e09ab0,PodSandboxId:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717417569721394616,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,},Annotations:map[string]string{io.kubernetes.container.hash: df90b885,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa,PodSandboxId:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717417565412297941,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,},Annotations:map[string]string{io.kubernetes.container.hash: 382214a7,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:435447885e6a3c602fae7bd77b056971c54bbc1ada0aa5e9e9f634db78fc7c0a,PodSandboxId:543fde334d4b530434d593b1fb43a32cd0a
a6dd937131e82b4db8d5f79083144,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_EXITED,CreatedAt:1717417539747054220,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a1c096-2479-4d10-864a-8b202b08a284,},Annotations:map[string]string{io.kubernetes.container.hash: 409d8265,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06,PodSandboxId:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717417533319293443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{io.kubernetes.container.hash: ed5f337c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3,PodSandboxId:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717417526164116856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,},Annotations:map[string]string{io.kubernetes.container.hash: b559e7d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e,PodSandboxId:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717417523013109940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,},Annotations:map[string]string{io.kubernetes.container.hash: dde3c0ec,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514,PodSandboxId:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717417503291175049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,},Annotations:map[string]string{io.kubernetes.container.hash: eee46468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8,PodSandboxId:dfa5c4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717417503268941281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4,PodSandboxId:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717417503266848986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe1b2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b,PodSandboxId:0b594b2f837fe04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717417503202321229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e264d67def89fa6266f980f6f77444,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7c6a863-792e-4035-a4c3-6d145ae50a33 name=/runtime.v1.RuntimeService/ListContainers
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.567383343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=faa35769-6109-4f37-9e76-68c92b302a11 name=/runtime.v1.RuntimeService/Version
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.567470068Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=faa35769-6109-4f37-9e76-68c92b302a11 name=/runtime.v1.RuntimeService/Version
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.568582446Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2125ce5-aa30-4bd6-8252-4cd5fc3a0d94 name=/runtime.v1.ImageService/ImageFsInfo
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.570152328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417833570126196,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2125ce5-aa30-4bd6-8252-4cd5fc3a0d94 name=/runtime.v1.ImageService/ImageFsInfo
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.570773856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d5896ab-34f8-41c7-8f10-e1d224bd05c9 name=/runtime.v1.RuntimeService/ListContainers
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.570844543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d5896ab-34f8-41c7-8f10-e1d224bd05c9 name=/runtime.v1.RuntimeService/ListContainers
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.571191236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d642bc3271163ac07b67d5d065df03b7f8ec966349c684f21b1bd59704f0e69,PodSandboxId:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717417824993167174,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,},Annotations:map[string]string{io.kubernetes.container.hash: 29cd3655,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a173411215156f8b18d5fbf4880f8e8fdde156ec2a9e410913aa0c571553461a,PodSandboxId:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717417684144009266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,},Annotations:map[string]string{io.kubern
etes.container.hash: d250beef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf5932194780bd3cb97f9edc1a375d5a81fda4f3a462fe7b477ade5bb3d2ef1,PodSandboxId:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717417672655915191,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,},Annotations:map[string]string{io.kubernetes.container.hash: 22246b3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4,PodSandboxId:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717417604379198722,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,},Annotations:map[string]string{io.kubernetes.container.hash: 7c46e196,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d13bd5e73c30ae4165f06b2185319aeacf35e7ffaa0b56363eb04137f5f6968,PodSandboxId:b07a28e9eef859d799892174459ac6d06f7f796f90c1d805586d8d4438fd0f2d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1717417585093806521,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rl49z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 40f21b83-9dbc-4bc9-b23d-5c8c1aa04d70,},Annotations:map[string]string{io.kubernetes.container.hash: 7e9d6aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78062314a87041d03dc6e5e7132662b0ff33b7d83c2f19e08843de0216f60c0f,PodSandboxId:52827ed278e4e8c5717896922ec331ff8f986d672d90a707092e16fb2a1356a5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1717417584963806225,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7kn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 676ead8d-a891-4dac-8cc5-992c426fcdc9,},Annotations:map[string]string{io.kubernetes.container.hash: d1d99a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08062fd585905d420205076b1d3f748399b18214282181c986eb4e0dcdcb686f,PodSandboxId:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717417579443576098,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,},Annotations:map[string]string{io.kubernetes.container.hash: 32e0c41d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff24eb8563b0cdf092e0477748bf3b7abddc2958ee31f2fce5b90d2987e09ab0,PodSandboxId:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717417569721394616,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,},Annotations:map[string]string{io.kubernetes.container.hash: df90b885,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa,PodSandboxId:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717417565412297941,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,},Annotations:map[string]string{io.kubernetes.container.hash: 382214a7,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:435447885e6a3c602fae7bd77b056971c54bbc1ada0aa5e9e9f634db78fc7c0a,PodSandboxId:543fde334d4b530434d593b1fb43a32cd0a
a6dd937131e82b4db8d5f79083144,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_EXITED,CreatedAt:1717417539747054220,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a1c096-2479-4d10-864a-8b202b08a284,},Annotations:map[string]string{io.kubernetes.container.hash: 409d8265,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06,PodSandboxId:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717417533319293443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{io.kubernetes.container.hash: ed5f337c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3,PodSandboxId:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717417526164116856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,},Annotations:map[string]string{io.kubernetes.container.hash: b559e7d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e,PodSandboxId:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717417523013109940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,},Annotations:map[string]string{io.kubernetes.container.hash: dde3c0ec,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514,PodSandboxId:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717417503291175049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,},Annotations:map[string]string{io.kubernetes.container.hash: eee46468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8,PodSandboxId:dfa5c4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717417503268941281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4,PodSandboxId:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717417503266848986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe1b2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b,PodSandboxId:0b594b2f837fe04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717417503202321229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e264d67def89fa6266f980f6f77444,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d5896ab-34f8-41c7-8f10-e1d224bd05c9 name=/runtime.v1.RuntimeService/ListContainers
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.604619003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d791d7a5-a49c-4828-8b1a-1abda82ba5ef name=/runtime.v1.RuntimeService/Version
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.604811155Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d791d7a5-a49c-4828-8b1a-1abda82ba5ef name=/runtime.v1.RuntimeService/Version
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.606089887Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d49b64d9-dffb-4f75-a133-301bf33b4ea5 name=/runtime.v1.ImageService/ImageFsInfo
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.607430682Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417833607405119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d49b64d9-dffb-4f75-a133-301bf33b4ea5 name=/runtime.v1.ImageService/ImageFsInfo
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.608092019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ded845f4-ad6b-4e79-8933-a2af527044f3 name=/runtime.v1.RuntimeService/ListContainers
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.608162721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ded845f4-ad6b-4e79-8933-a2af527044f3 name=/runtime.v1.RuntimeService/ListContainers
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.608480061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d642bc3271163ac07b67d5d065df03b7f8ec966349c684f21b1bd59704f0e69,PodSandboxId:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717417824993167174,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,},Annotations:map[string]string{io.kubernetes.container.hash: 29cd3655,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a173411215156f8b18d5fbf4880f8e8fdde156ec2a9e410913aa0c571553461a,PodSandboxId:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717417684144009266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,},Annotations:map[string]string{io.kubern
etes.container.hash: d250beef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf5932194780bd3cb97f9edc1a375d5a81fda4f3a462fe7b477ade5bb3d2ef1,PodSandboxId:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717417672655915191,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,},Annotations:map[string]string{io.kubernetes.container.hash: 22246b3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4,PodSandboxId:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717417604379198722,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,},Annotations:map[string]string{io.kubernetes.container.hash: 7c46e196,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d13bd5e73c30ae4165f06b2185319aeacf35e7ffaa0b56363eb04137f5f6968,PodSandboxId:b07a28e9eef859d799892174459ac6d06f7f796f90c1d805586d8d4438fd0f2d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1717417585093806521,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rl49z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 40f21b83-9dbc-4bc9-b23d-5c8c1aa04d70,},Annotations:map[string]string{io.kubernetes.container.hash: 7e9d6aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78062314a87041d03dc6e5e7132662b0ff33b7d83c2f19e08843de0216f60c0f,PodSandboxId:52827ed278e4e8c5717896922ec331ff8f986d672d90a707092e16fb2a1356a5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1717417584963806225,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7kn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 676ead8d-a891-4dac-8cc5-992c426fcdc9,},Annotations:map[string]string{io.kubernetes.container.hash: d1d99a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08062fd585905d420205076b1d3f748399b18214282181c986eb4e0dcdcb686f,PodSandboxId:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717417579443576098,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,},Annotations:map[string]string{io.kubernetes.container.hash: 32e0c41d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff24eb8563b0cdf092e0477748bf3b7abddc2958ee31f2fce5b90d2987e09ab0,PodSandboxId:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717417569721394616,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,},Annotations:map[string]string{io.kubernetes.container.hash: df90b885,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa,PodSandboxId:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717417565412297941,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,},Annotations:map[string]string{io.kubernetes.container.hash: 382214a7,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:435447885e6a3c602fae7bd77b056971c54bbc1ada0aa5e9e9f634db78fc7c0a,PodSandboxId:543fde334d4b530434d593b1fb43a32cd0a
a6dd937131e82b4db8d5f79083144,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_EXITED,CreatedAt:1717417539747054220,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a1c096-2479-4d10-864a-8b202b08a284,},Annotations:map[string]string{io.kubernetes.container.hash: 409d8265,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06,PodSandboxId:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717417533319293443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{io.kubernetes.container.hash: ed5f337c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3,PodSandboxId:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717417526164116856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,},Annotations:map[string]string{io.kubernetes.container.hash: b559e7d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e,PodSandboxId:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717417523013109940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,},Annotations:map[string]string{io.kubernetes.container.hash: dde3c0ec,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514,PodSandboxId:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717417503291175049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,},Annotations:map[string]string{io.kubernetes.container.hash: eee46468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8,PodSandboxId:dfa5c4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717417503268941281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4,PodSandboxId:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717417503266848986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe1b2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b,PodSandboxId:0b594b2f837fe04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717417503202321229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e264d67def89fa6266f980f6f77444,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ded845f4-ad6b-4e79-8933-a2af527044f3 name=/runtime.v1.RuntimeService/ListContainers
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.645864445Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e023e47a-ce10-4ded-a64a-ae6a2d3bcb26 name=/runtime.v1.RuntimeService/Version
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.645955534Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e023e47a-ce10-4ded-a64a-ae6a2d3bcb26 name=/runtime.v1.RuntimeService/Version
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.647182216Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23f6cfba-eaea-4c30-b7ac-5db27e7dd1e1 name=/runtime.v1.ImageService/ImageFsInfo
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.648442087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717417833648415709,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:584738,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23f6cfba-eaea-4c30-b7ac-5db27e7dd1e1 name=/runtime.v1.ImageService/ImageFsInfo
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.649011776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0adb0d59-0b1d-4b86-b2aa-3f6e75a8abb0 name=/runtime.v1.RuntimeService/ListContainers
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.649103464Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0adb0d59-0b1d-4b86-b2aa-3f6e75a8abb0 name=/runtime.v1.RuntimeService/ListContainers
Jun 03 12:30:33 addons-699562 crio[679]: time="2024-06-03 12:30:33.649524921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8d642bc3271163ac07b67d5d065df03b7f8ec966349c684f21b1bd59704f0e69,PodSandboxId:2cd7a7a28e0a544856b8d5555606d91ef93f12a0d206dd883ea24422a0b3358b,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1717417824993167174,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-86c47465fc-79c22,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 084158b3-1687-4f4c-b741-cbab7ca11858,},Annotations:map[string]string{io.kubernetes.container.hash: 29cd3655,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a173411215156f8b18d5fbf4880f8e8fdde156ec2a9e410913aa0c571553461a,PodSandboxId:f58876a06d48db0d426b93948a20a6d66b18f312f72772da56715a105e6fb466,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:70ea0d8cc5300acde42073a2fbc0d28964ddb6e3c31263d92589c2320c3ccba4,State:CONTAINER_RUNNING,CreatedAt:1717417684144009266,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 22eac9e0-47f1-46a1-9745-87ca515de64e,},Annotations:map[string]string{io.kubern
etes.container.hash: d250beef,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bf5932194780bd3cb97f9edc1a375d5a81fda4f3a462fe7b477ade5bb3d2ef1,PodSandboxId:83a0e5827ce1a87f9a28b80c3e8aef138aa1aafbe0be947094b5660af09e3673,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bd42824d488ce58074f6d54cb051437d0dc2669f3f96a4d9b3b72a8d7ddda679,State:CONTAINER_RUNNING,CreatedAt:1717417672655915191,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-68456f997b-tpgtj,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.
uid: c02f3cb7-dd75-4d83-89fe-082ca6c80805,},Annotations:map[string]string{io.kubernetes.container.hash: 22246b3d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4,PodSandboxId:266b9c9ff3c4be3b23861b9133bb076fc65c831d3b9d14a733790d25dd14cecb,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1717417604379198722,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-5db96cd9b4-vq6sn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: d4773645-1a91-48bc-a27e-61822e3eb944,},Annotations:map[string]string{io.kubernetes.container.hash: 7c46e196,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d13bd5e73c30ae4165f06b2185319aeacf35e7ffaa0b56363eb04137f5f6968,PodSandboxId:b07a28e9eef859d799892174459ac6d06f7f796f90c1d805586d8d4438fd0f2d,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTA
INER_EXITED,CreatedAt:1717417585093806521,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-rl49z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 40f21b83-9dbc-4bc9-b23d-5c8c1aa04d70,},Annotations:map[string]string{io.kubernetes.container.hash: 7e9d6aae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78062314a87041d03dc6e5e7132662b0ff33b7d83c2f19e08843de0216f60c0f,PodSandboxId:52827ed278e4e8c5717896922ec331ff8f986d672d90a707092e16fb2a1356a5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f
6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1717417584963806225,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h7kn8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 676ead8d-a891-4dac-8cc5-992c426fcdc9,},Annotations:map[string]string{io.kubernetes.container.hash: d1d99a3a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08062fd585905d420205076b1d3f748399b18214282181c986eb4e0dcdcb686f,PodSandboxId:3c836ea529a741dc02ac68dcdc31ac8e0d959d76d24d1fa21e9a52f7f15d92d9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1717417579443576098,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-5ddbf7d777-th7qj,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: cb66a0b3-53cb-493e-8010-d545cc1dc5b8,},Annotations:map[string]string{io.kubernetes.container.hash: 32e0c41d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff24eb8563b0cdf092e0477748bf3b7abddc2958ee31f2fce5b90d2987e09ab0,PodSandboxId:b76bfaf676bbeb8b006b7032d5ac92a16463266e80b93e0e52cda1863a458b9c,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd
96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1717417569721394616,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-8d985888d-2trqm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1f4740f5-01f4-413e-8b79-311c67526d69,},Annotations:map[string]string{io.kubernetes.container.hash: df90b885,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:071b33296d63e35493453ab9868ec545daa42d19a1436dbc8c4e22d7983162fa,PodSandboxId:c808b7e546b606d9b24586ce1db0e971c4e35cf5bc1eae84ffb5fa24b44cfbf6,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-ser
ver/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1717417565412297941,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-pl8qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26f4580a-9514-47c0-aa22-11c454eaca32,},Annotations:map[string]string{io.kubernetes.container.hash: 382214a7,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:435447885e6a3c602fae7bd77b056971c54bbc1ada0aa5e9e9f634db78fc7c0a,PodSandboxId:543fde334d4b530434d593b1fb43a32cd0a
a6dd937131e82b4db8d5f79083144,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_EXITED,CreatedAt:1717417539747054220,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21a1c096-2479-4d10-864a-8b202b08a284,},Annotations:map[string]string{io.kubernetes.container.hash: 409d8265,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},&Container{Id:17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06,PodSandboxId:81961c6a37d61c8b612f41f7f942f2b0a4c108ba966128c4208ecab42f3fe95c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717417533319293443,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3d92bc5-3f10-47e3-84a9-f532f14deae4,},Annotations:map[string]string{io.kubernetes.container.hash: ed5f337c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Co
ntainer{Id:35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3,PodSandboxId:0bfe8f416027409f7e1eac5af8acd40c936317be61611534f1284947fa2ef9f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717417526164116856,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hmhdl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3cfe166-99f3-4ac9-9905-8be76bcb511d,},Annotations:map[string]string{io.kubernetes.container.hash: b559e7d2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e,PodSandboxId:bdb166637cc76b778ba00bf3d396efbe8bd2978f9e621874b1bb0fb2220aff46,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717417523013109940,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6ssr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 609d1553-86b5-46ea-b503-bdfd9f291571,},Annotations:map[string]string{io.kubernetes.container.hash: dde3c0ec,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514,PodSandboxId:b7cc010079adda6957a2caae2c510628a876664b8dca66867c8c5a8f08ddc1c5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717417503291175049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 628fee2574e8e2e94faacdc70733c8af,},Annotations:map[string]string{io.kubernetes.container.hash: eee46468,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8,PodSandboxId:dfa5c4cb4bc79b6324610ebb8427a1121ff9130a3754c8522018aadb5bc2e443,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717417503268941281,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65e7e1f6b5fb520ef619dd246fd97035,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4,PodSandboxId:96186e4c50e5eb41ec7257ba2b4ec8474fc6064c8a935839054727baa8b306a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717417503266848986,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c2e93774694bec0d9f39543e1c101b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfe1b2c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b,PodSandboxId:0b594b2f837fe04b34ea1200fca819f9b4bc408fed28f0e293849d18e3e2d779,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717417503202321229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-699562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2e264d67def89fa6266f980f6f77444,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0adb0d59-0b1d-4b86-b2aa-3f6e75a8abb0 name=/runtime.v1.RuntimeService/ListContainers
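The ListContainers dumps above are CRI-O's debug records of the /runtime.v1.RuntimeService/ListContainers RPC, which the kubelet (and tools such as crictl) poll continuously. Below is a minimal Go sketch, not part of the test run, of issuing the same call directly against the CRI-O socket; the socket path is taken from the kubeadm cri-socket annotation shown in the node description further down, and the k8s.io/cri-api and google.golang.org/grpc modules are assumptions about how one would reproduce the call, not something the test itself exercised.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket, matching the cri-socket annotation
	// (unix:///var/run/crio/crio.sock) reported for this node.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter corresponds to the "No filters were applied,
	// returning full container list" debug line above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Print the container ID, name and state, roughly the columns of
		// the "container status" table that follows.
		fmt.Printf("%s %s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}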
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
8d642bc327116 gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7 8 seconds ago Running hello-world-app 0 2cd7a7a28e0a5 hello-world-app-86c47465fc-79c22
a173411215156 docker.io/library/nginx@sha256:059cdcc5de66cd4e588b5f416b98a7af82e75413e2bf275f1e673c5d7d4b1afa 2 minutes ago Running nginx 0 f58876a06d48d nginx
9bf5932194780 ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474 2 minutes ago Running headlamp 0 83a0e5827ce1a headlamp-68456f997b-tpgtj
8f787a95dc6ea gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b 3 minutes ago Running gcp-auth 0 266b9c9ff3c4b gcp-auth-5db96cd9b4-vq6sn
3d13bd5e73c30 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2 4 minutes ago Exited patch 0 b07a28e9eef85 ingress-nginx-admission-patch-rl49z
78062314a8704 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2 4 minutes ago Exited create 0 52827ed278e4e ingress-nginx-admission-create-h7kn8
08062fd585905 docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310 4 minutes ago Running yakd 0 3c836ea529a74 yakd-dashboard-5ddbf7d777-th7qj
ff24eb8563b0c docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 4 minutes ago Running local-path-provisioner 0 b76bfaf676bbe local-path-provisioner-8d985888d-2trqm
071b33296d63e registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872 4 minutes ago Running metrics-server 0 c808b7e546b60 metrics-server-c59844bb4-pl8qk
435447885e6a3 gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f 4 minutes ago Exited minikube-ingress-dns 0 543fde334d4b5 kube-ingress-dns-minikube
17a9104d81026 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 5 minutes ago Running storage-provisioner 0 81961c6a37d61 storage-provisioner
35f4eaf8d81f1 cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 5 minutes ago Running coredns 0 0bfe8f4160274 coredns-7db6d8ff4d-hmhdl
6add0233edc94 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd 5 minutes ago Running kube-proxy 0 bdb166637cc76 kube-proxy-6ssr8
0c7a1cc6df31c 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899 5 minutes ago Running etcd 0 b7cc010079add etcd-addons-699562
5dacc96e3a0d6 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c 5 minutes ago Running kube-controller-manager 0 dfa5c4cb4bc79 kube-controller-manager-addons-699562
ff21db0353955 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a 5 minutes ago Running kube-apiserver 0 96186e4c50e5e kube-apiserver-addons-699562
92e20bf314646 a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035 5 minutes ago Running kube-scheduler 0 0b594b2f837fe kube-scheduler-addons-699562
==> coredns [35f4eaf8d81f1547cfdacb0fd21110ec3d1f7bca90202604d57311d6c444d4e3] <==
[INFO] 10.244.0.8:53029 - 52615 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000329373s
[INFO] 10.244.0.8:53749 - 1624 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062672s
[INFO] 10.244.0.8:53749 - 24926 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00015104s
[INFO] 10.244.0.8:58411 - 17668 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000212275s
[INFO] 10.244.0.8:58411 - 11274 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000595s
[INFO] 10.244.0.8:59239 - 53735 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00017039s
[INFO] 10.244.0.8:59239 - 37605 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000201028s
[INFO] 10.244.0.8:52190 - 44357 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000260199s
[INFO] 10.244.0.8:52190 - 54344 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000203243s
[INFO] 10.244.0.8:40017 - 29233 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095616s
[INFO] 10.244.0.8:40017 - 50748 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00023833s
[INFO] 10.244.0.8:40407 - 532 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075947s
[INFO] 10.244.0.8:40407 - 24106 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107276s
[INFO] 10.244.0.8:55786 - 11074 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000068658s
[INFO] 10.244.0.8:55786 - 40000 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154175s
[INFO] 10.244.0.22:48426 - 37810 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000911895s
[INFO] 10.244.0.22:55143 - 37903 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000924721s
[INFO] 10.244.0.22:54175 - 35195 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155679s
[INFO] 10.244.0.22:46392 - 19652 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000056593s
[INFO] 10.244.0.22:44105 - 37037 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000209248s
[INFO] 10.244.0.22:58175 - 33620 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000073278s
[INFO] 10.244.0.22:48829 - 15494 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001483557s
[INFO] 10.244.0.22:45600 - 59491 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00177612s
[INFO] 10.244.0.27:52018 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000487785s
[INFO] 10.244.0.27:44227 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000146213s
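The NXDOMAIN/NOERROR pairs above are not resolution failures: with the default pod resolv.conf (ndots:5), a lookup for registry.kube-system.svc.cluster.local is first tried with each search suffix appended (hence the ...kube-system.svc.cluster.local, ...svc.cluster.local and ...cluster.local forms returning NXDOMAIN) before the fully-qualified name answers NOERROR. A minimal Go sketch of asking the cluster DNS for the final name directly is below; the 10.96.0.10 server address is the conventional kube-dns ClusterIP and is an assumption not shown in this log, and the lookup only succeeds when run from inside the cluster.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			// Hypothetical cluster DNS address; substitute the real
			// kube-dns service IP for the cluster under test.
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	// The fully-qualified form is the query that returned NOERROR above;
	// the search-suffixed variants are the ones that returned NXDOMAIN.
	addrs, err := r.LookupHost(context.Background(), "registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(addrs)
}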
==> describe nodes <==
Name: addons-699562
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-699562
kubernetes.io/os=linux
minikube.k8s.io/commit=3e16338a2e51863cb2fad83b163378f045b3a354
minikube.k8s.io/name=addons-699562
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_06_03T12_25_09_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-699562
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 03 Jun 2024 12:25:05 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-699562
AcquireTime: <unset>
RenewTime: Mon, 03 Jun 2024 12:30:26 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 03 Jun 2024 12:28:43 +0000 Mon, 03 Jun 2024 12:25:03 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 03 Jun 2024 12:28:43 +0000 Mon, 03 Jun 2024 12:25:03 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 03 Jun 2024 12:28:43 +0000 Mon, 03 Jun 2024 12:25:03 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 03 Jun 2024 12:28:43 +0000 Mon, 03 Jun 2024 12:25:09 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.241
Hostname: addons-699562
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912780Ki
pods: 110
System Info:
Machine ID: 84ef91a2c9524e6487a854dd506d694c
System UUID: 84ef91a2-c952-4e64-87a8-54dd506d694c
Boot ID: af6edd86-d456-43e7-97d1-dac4dba15c8e
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.30.1
Kube-Proxy Version: v1.30.1
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default hello-world-app-86c47465fc-79c22 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m33s
gcp-auth gcp-auth-5db96cd9b4-vq6sn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m58s
headlamp headlamp-68456f997b-tpgtj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m47s
kube-system coredns-7db6d8ff4d-hmhdl 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 5m11s
kube-system etcd-addons-699562 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 5m25s
kube-system kube-apiserver-addons-699562 250m (12%) 0 (0%) 0 (0%) 0 (0%) 5m25s
kube-system kube-controller-manager-addons-699562 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m25s
kube-system kube-proxy-6ssr8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m11s
kube-system kube-scheduler-addons-699562 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m25s
kube-system metrics-server-c59844bb4-pl8qk 100m (5%) 0 (0%) 200Mi (5%) 0 (0%) 5m5s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m5s
local-path-storage local-path-provisioner-8d985888d-2trqm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m4s
yakd-dashboard yakd-dashboard-5ddbf7d777-th7qj 0 (0%) 0 (0%) 128Mi (3%) 256Mi (6%) 5m4s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 498Mi (13%) 426Mi (11%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 5m10s kube-proxy
Normal NodeHasSufficientMemory 5m31s (x8 over 5m31s) kubelet Node addons-699562 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m31s (x8 over 5m31s) kubelet Node addons-699562 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m31s (x7 over 5m31s) kubelet Node addons-699562 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m31s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m25s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 5m25s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 5m25s kubelet Node addons-699562 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m25s kubelet Node addons-699562 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m25s kubelet Node addons-699562 status is now: NodeHasSufficientPID
Normal NodeReady 5m24s kubelet Node addons-699562 status is now: NodeReady
Normal RegisteredNode 5m12s node-controller Node addons-699562 event: Registered Node addons-699562 in Controller
==> dmesg <==
[ +4.797363] kauditd_printk_skb: 96 callbacks suppressed
[ +5.070667] kauditd_printk_skb: 100 callbacks suppressed
[ +6.543858] kauditd_printk_skb: 115 callbacks suppressed
[ +8.740436] kauditd_printk_skb: 9 callbacks suppressed
[ +6.913062] kauditd_printk_skb: 2 callbacks suppressed
[Jun 3 12:26] kauditd_printk_skb: 6 callbacks suppressed
[ +7.135899] kauditd_printk_skb: 40 callbacks suppressed
[ +11.704931] kauditd_printk_skb: 2 callbacks suppressed
[ +5.044616] kauditd_printk_skb: 10 callbacks suppressed
[ +5.006333] kauditd_printk_skb: 85 callbacks suppressed
[ +9.474385] kauditd_printk_skb: 24 callbacks suppressed
[ +5.925812] kauditd_printk_skb: 18 callbacks suppressed
[ +8.811764] kauditd_printk_skb: 24 callbacks suppressed
[Jun 3 12:27] kauditd_printk_skb: 24 callbacks suppressed
[ +5.419790] kauditd_printk_skb: 29 callbacks suppressed
[ +5.060102] kauditd_printk_skb: 35 callbacks suppressed
[Jun 3 12:28] kauditd_printk_skb: 82 callbacks suppressed
[ +6.441502] kauditd_printk_skb: 56 callbacks suppressed
[ +5.709911] kauditd_printk_skb: 2 callbacks suppressed
[ +5.585647] kauditd_printk_skb: 2 callbacks suppressed
[ +8.428593] kauditd_printk_skb: 3 callbacks suppressed
[ +15.030336] kauditd_printk_skb: 7 callbacks suppressed
[ +8.159254] kauditd_printk_skb: 33 callbacks suppressed
[Jun 3 12:30] kauditd_printk_skb: 6 callbacks suppressed
[ +6.068631] kauditd_printk_skb: 19 callbacks suppressed
==> etcd [0c7a1cc6df31c0c301fee639aa62ce868d9a11802928a59d2d19c941e0c51514] <==
{"level":"info","ts":"2024-06-03T12:26:40.969184Z","caller":"traceutil/trace.go:171","msg":"trace[875911949] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-c59844bb4-pl8qk; range_end:; response_count:1; response_revision:1164; }","duration":"172.808766ms","start":"2024-06-03T12:26:40.796368Z","end":"2024-06-03T12:26:40.969176Z","steps":["trace[875911949] 'agreement among raft nodes before linearized reading' (duration: 172.731177ms)"],"step_count":1}
{"level":"warn","ts":"2024-06-03T12:26:40.969083Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.658256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2024-06-03T12:26:40.969383Z","caller":"traceutil/trace.go:171","msg":"trace[1638708360] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:1164; }","duration":"168.010019ms","start":"2024-06-03T12:26:40.801363Z","end":"2024-06-03T12:26:40.969373Z","steps":["trace[1638708360] 'agreement among raft nodes before linearized reading' (duration: 167.606225ms)"],"step_count":1}
{"level":"warn","ts":"2024-06-03T12:26:40.969317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.010904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
{"level":"info","ts":"2024-06-03T12:26:40.969599Z","caller":"traceutil/trace.go:171","msg":"trace[494188914] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1164; }","duration":"363.307211ms","start":"2024-06-03T12:26:40.606284Z","end":"2024-06-03T12:26:40.969591Z","steps":["trace[494188914] 'agreement among raft nodes before linearized reading' (duration: 362.985664ms)"],"step_count":1}
{"level":"warn","ts":"2024-06-03T12:26:40.969676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:26:40.60627Z","time spent":"363.34474ms","remote":"127.0.0.1:33146","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11475,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
{"level":"info","ts":"2024-06-03T12:26:44.2914Z","caller":"traceutil/trace.go:171","msg":"trace[27234936] linearizableReadLoop","detail":"{readStateIndex:1214; appliedIndex:1213; }","duration":"329.42933ms","start":"2024-06-03T12:26:43.961874Z","end":"2024-06-03T12:26:44.291304Z","steps":["trace[27234936] 'read index received' (duration: 329.000447ms)","trace[27234936] 'applied index is now lower than readState.Index' (duration: 428.24µs)"],"step_count":2}
{"level":"info","ts":"2024-06-03T12:26:44.291596Z","caller":"traceutil/trace.go:171","msg":"trace[1309044197] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"436.33847ms","start":"2024-06-03T12:26:43.85524Z","end":"2024-06-03T12:26:44.291579Z","steps":["trace[1309044197] 'process raft request' (duration: 435.849089ms)"],"step_count":1}
{"level":"warn","ts":"2024-06-03T12:26:44.29339Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:26:43.855226Z","time spent":"438.045751ms","remote":"127.0.0.1:33140","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1166 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
{"level":"warn","ts":"2024-06-03T12:26:44.293718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.692606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11453"}
{"level":"info","ts":"2024-06-03T12:26:44.293762Z","caller":"traceutil/trace.go:171","msg":"trace[49774894] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1181; }","duration":"187.765631ms","start":"2024-06-03T12:26:44.10599Z","end":"2024-06-03T12:26:44.293756Z","steps":["trace[49774894] 'agreement among raft nodes before linearized reading' (duration: 187.507223ms)"],"step_count":1}
{"level":"info","ts":"2024-06-03T12:26:44.293953Z","caller":"traceutil/trace.go:171","msg":"trace[1996245162] transaction","detail":"{read_only:false; response_revision:1181; number_of_response:1; }","duration":"289.652518ms","start":"2024-06-03T12:26:44.004294Z","end":"2024-06-03T12:26:44.293946Z","steps":["trace[1996245162] 'process raft request' (duration: 289.152911ms)"],"step_count":1}
{"level":"warn","ts":"2024-06-03T12:26:44.291718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"329.823247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-06-03T12:26:44.294214Z","caller":"traceutil/trace.go:171","msg":"trace[1501958648] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:0; response_revision:1180; }","duration":"332.361109ms","start":"2024-06-03T12:26:43.961847Z","end":"2024-06-03T12:26:44.294208Z","steps":["trace[1501958648] 'agreement among raft nodes before linearized reading' (duration: 329.829315ms)"],"step_count":1}
{"level":"warn","ts":"2024-06-03T12:26:44.294236Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:26:43.961834Z","time spent":"332.394927ms","remote":"127.0.0.1:33232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":27,"request content":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" "}
{"level":"info","ts":"2024-06-03T12:28:03.350249Z","caller":"traceutil/trace.go:171","msg":"trace[112141829] linearizableReadLoop","detail":"{readStateIndex:1550; appliedIndex:1549; }","duration":"344.411339ms","start":"2024-06-03T12:28:03.005804Z","end":"2024-06-03T12:28:03.350215Z","steps":["trace[112141829] 'read index received' (duration: 344.171196ms)","trace[112141829] 'applied index is now lower than readState.Index' (duration: 239.674µs)"],"step_count":2}
{"level":"info","ts":"2024-06-03T12:28:03.350549Z","caller":"traceutil/trace.go:171","msg":"trace[525061229] transaction","detail":"{read_only:false; response_revision:1493; number_of_response:1; }","duration":"407.785729ms","start":"2024-06-03T12:28:02.942753Z","end":"2024-06-03T12:28:03.350539Z","steps":["trace[525061229] 'process raft request' (duration: 407.359787ms)"],"step_count":1}
{"level":"warn","ts":"2024-06-03T12:28:03.350779Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:28:02.942735Z","time spent":"407.849445ms","remote":"127.0.0.1:33146","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4125,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/registry-proxy-n8265\" mod_revision:1490 > success:<request_put:<key:\"/registry/pods/kube-system/registry-proxy-n8265\" value_size:4070 >> failure:<request_range:<key:\"/registry/pods/kube-system/registry-proxy-n8265\" > >"}
{"level":"warn","ts":"2024-06-03T12:28:03.350953Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.148304ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
{"level":"info","ts":"2024-06-03T12:28:03.350975Z","caller":"traceutil/trace.go:171","msg":"trace[166667215] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1493; }","duration":"345.228477ms","start":"2024-06-03T12:28:03.00574Z","end":"2024-06-03T12:28:03.350969Z","steps":["trace[166667215] 'agreement among raft nodes before linearized reading' (duration: 345.150589ms)"],"step_count":1}
{"level":"warn","ts":"2024-06-03T12:28:03.35099Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:28:03.005727Z","time spent":"345.260644ms","remote":"127.0.0.1:33232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":521,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
{"level":"warn","ts":"2024-06-03T12:28:03.351095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.791048ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6051"}
{"level":"info","ts":"2024-06-03T12:28:03.351136Z","caller":"traceutil/trace.go:171","msg":"trace[2033513505] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1493; }","duration":"223.850857ms","start":"2024-06-03T12:28:03.12728Z","end":"2024-06-03T12:28:03.35113Z","steps":["trace[2033513505] 'agreement among raft nodes before linearized reading' (duration: 223.776833ms)"],"step_count":1}
{"level":"info","ts":"2024-06-03T12:28:19.240319Z","caller":"traceutil/trace.go:171","msg":"trace[1745785475] transaction","detail":"{read_only:false; response_revision:1593; number_of_response:1; }","duration":"377.328726ms","start":"2024-06-03T12:28:18.862975Z","end":"2024-06-03T12:28:19.240304Z","steps":["trace[1745785475] 'process raft request' (duration: 377.221796ms)"],"step_count":1}
{"level":"warn","ts":"2024-06-03T12:28:19.240445Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-06-03T12:28:18.862954Z","time spent":"377.437463ms","remote":"127.0.0.1:33140","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1589 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
==> gcp-auth [8f787a95dc6ea2d78d819bc9e3ce31d217271f40af9c989319ddc466faa542c4] <==
2024/06/03 12:26:44 GCP Auth Webhook started!
2024/06/03 12:27:45 Ready to marshal response ...
2024/06/03 12:27:45 Ready to write response ...
2024/06/03 12:27:45 Ready to marshal response ...
2024/06/03 12:27:45 Ready to write response ...
2024/06/03 12:27:46 Ready to marshal response ...
2024/06/03 12:27:46 Ready to write response ...
2024/06/03 12:27:46 Ready to marshal response ...
2024/06/03 12:27:46 Ready to write response ...
2024/06/03 12:27:46 Ready to marshal response ...
2024/06/03 12:27:46 Ready to write response ...
2024/06/03 12:27:51 Ready to marshal response ...
2024/06/03 12:27:51 Ready to write response ...
2024/06/03 12:27:57 Ready to marshal response ...
2024/06/03 12:27:57 Ready to write response ...
2024/06/03 12:27:58 Ready to marshal response ...
2024/06/03 12:27:58 Ready to write response ...
2024/06/03 12:28:00 Ready to marshal response ...
2024/06/03 12:28:00 Ready to write response ...
2024/06/03 12:28:13 Ready to marshal response ...
2024/06/03 12:28:13 Ready to write response ...
2024/06/03 12:28:35 Ready to marshal response ...
2024/06/03 12:28:35 Ready to write response ...
2024/06/03 12:30:22 Ready to marshal response ...
2024/06/03 12:30:22 Ready to write response ...
==> kernel <==
12:30:34 up 6 min, 0 users, load average: 0.34, 1.11, 0.63
Linux addons-699562 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [ff21db0353955ca8d02785382a653b0d945e75dbc15d6056da1fd05b0f72f2c4] <==
E0603 12:27:09.544611 1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.164.223:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.164.223:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.164.223:443: connect: connection refused
I0603 12:27:09.607112 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
I0603 12:27:46.582529 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.91.232"}
I0603 12:27:59.941066 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I0603 12:28:00.119538 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.213.114"}
I0603 12:28:04.555445 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
E0603 12:28:05.573603 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
E0603 12:28:05.580561 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"gadget\" not found]"
W0603 12:28:05.588138 1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
I0603 12:28:26.681238 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I0603 12:28:50.936173 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0603 12:28:50.936287 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0603 12:28:50.965267 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0603 12:28:50.965333 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0603 12:28:50.975534 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0603 12:28:50.975583 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0603 12:28:50.987547 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0603 12:28:50.987604 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I0603 12:28:51.026275 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I0603 12:28:51.029483 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W0603 12:28:51.975974 1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W0603 12:28:52.027110 1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W0603 12:28:52.037480 1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I0603 12:30:22.605713 1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.62.100"}
E0603 12:30:25.751190 1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
==> kube-controller-manager [5dacc96e3a0d65c427ed393f49dce81b0d6838d85460005e3bfacb21d51161e8] <==
W0603 12:29:17.014441 1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0603 12:29:17.014553 1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0603 12:29:27.425835 1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0603 12:29:27.425985 1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0603 12:29:30.412418 1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0603 12:29:30.412518 1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0603 12:29:30.812381 1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0603 12:29:30.812479 1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0603 12:30:04.128267 1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0603 12:30:04.128694 1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0603 12:30:04.205594 1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0603 12:30:04.205775 1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0603 12:30:07.142462 1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0603 12:30:07.142560 1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0603 12:30:07.589170 1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0603 12:30:07.589205 1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0603 12:30:22.438797 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="40.583646ms"
I0603 12:30:22.461785 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="22.885543ms"
I0603 12:30:22.461869 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="36.813µs"
I0603 12:30:22.468989 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="24.42µs"
I0603 12:30:25.651086 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="3.624µs"
I0603 12:30:25.654400 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
I0603 12:30:25.681077 1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
I0603 12:30:26.051406 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="9.979993ms"
I0603 12:30:26.052358 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="79.584µs"
==> kube-proxy [6add0233edc943014e1d0cd253c4b3e434922141b9116389f4d7c00c4fb8f74e] <==
I0603 12:25:23.585524 1 server_linux.go:69] "Using iptables proxy"
I0603 12:25:23.608160 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.241"]
I0603 12:25:23.755662 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0603 12:25:23.755712 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0603 12:25:23.755727 1 server_linux.go:165] "Using iptables Proxier"
I0603 12:25:23.759852 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0603 12:25:23.760062 1 server.go:872] "Version info" version="v1.30.1"
I0603 12:25:23.760076 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0603 12:25:23.764482 1 config.go:192] "Starting service config controller"
I0603 12:25:23.764500 1 shared_informer.go:313] Waiting for caches to sync for service config
I0603 12:25:23.764539 1 config.go:101] "Starting endpoint slice config controller"
I0603 12:25:23.764543 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0603 12:25:23.766909 1 config.go:319] "Starting node config controller"
I0603 12:25:23.766944 1 shared_informer.go:313] Waiting for caches to sync for node config
I0603 12:25:23.865029 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0603 12:25:23.865071 1 shared_informer.go:320] Caches are synced for service config
I0603 12:25:23.867400 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [92e20bf3146469708eb022f97afa4e87de0863e9fc6584f1c33207af6410891b] <==
W0603 12:25:05.882911 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0603 12:25:05.883016 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0603 12:25:05.883736 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0603 12:25:05.884966 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0603 12:25:05.884245 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0603 12:25:05.884352 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0603 12:25:05.884720 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0603 12:25:05.884860 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0603 12:25:06.694024 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0603 12:25:06.694053 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0603 12:25:06.778297 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0603 12:25:06.778385 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0603 12:25:06.850846 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0603 12:25:06.850894 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0603 12:25:06.890880 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0603 12:25:06.891766 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0603 12:25:06.929739 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0603 12:25:06.929827 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0603 12:25:06.932321 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0603 12:25:06.932367 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0603 12:25:07.026054 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0603 12:25:07.026211 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0603 12:25:07.199563 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0603 12:25:07.199751 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0603 12:25:09.962396 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jun 03 12:30:22 addons-699562 kubelet[1268]: I0603 12:30:22.444619 1268 memory_manager.go:354] "RemoveStaleState removing state" podUID="db932b0d-726d-4b8d-b47c-dcbc1657a70d" containerName="node-driver-registrar"
Jun 03 12:30:22 addons-699562 kubelet[1268]: I0603 12:30:22.444708 1268 memory_manager.go:354] "RemoveStaleState removing state" podUID="d102455a-acf1-4067-b512-3e7d24676733" containerName="csi-resizer"
Jun 03 12:30:22 addons-699562 kubelet[1268]: I0603 12:30:22.444714 1268 memory_manager.go:354] "RemoveStaleState removing state" podUID="db932b0d-726d-4b8d-b47c-dcbc1657a70d" containerName="csi-provisioner"
Jun 03 12:30:22 addons-699562 kubelet[1268]: I0603 12:30:22.444719 1268 memory_manager.go:354] "RemoveStaleState removing state" podUID="db932b0d-726d-4b8d-b47c-dcbc1657a70d" containerName="csi-snapshotter"
Jun 03 12:30:22 addons-699562 kubelet[1268]: I0603 12:30:22.483130 1268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/084158b3-1687-4f4c-b741-cbab7ca11858-gcp-creds\") pod \"hello-world-app-86c47465fc-79c22\" (UID: \"084158b3-1687-4f4c-b741-cbab7ca11858\") " pod="default/hello-world-app-86c47465fc-79c22"
Jun 03 12:30:22 addons-699562 kubelet[1268]: I0603 12:30:22.483186 1268 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jrhz\" (UniqueName: \"kubernetes.io/projected/084158b3-1687-4f4c-b741-cbab7ca11858-kube-api-access-8jrhz\") pod \"hello-world-app-86c47465fc-79c22\" (UID: \"084158b3-1687-4f4c-b741-cbab7ca11858\") " pod="default/hello-world-app-86c47465fc-79c22"
Jun 03 12:30:24 addons-699562 kubelet[1268]: I0603 12:30:24.014181 1268 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="543fde334d4b530434d593b1fb43a32cd0aa6dd937131e82b4db8d5f79083144"
Jun 03 12:30:24 addons-699562 kubelet[1268]: I0603 12:30:24.294970 1268 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlv79\" (UniqueName: \"kubernetes.io/projected/21a1c096-2479-4d10-864a-8b202b08a284-kube-api-access-wlv79\") pod \"21a1c096-2479-4d10-864a-8b202b08a284\" (UID: \"21a1c096-2479-4d10-864a-8b202b08a284\") "
Jun 03 12:30:24 addons-699562 kubelet[1268]: I0603 12:30:24.308951 1268 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21a1c096-2479-4d10-864a-8b202b08a284-kube-api-access-wlv79" (OuterVolumeSpecName: "kube-api-access-wlv79") pod "21a1c096-2479-4d10-864a-8b202b08a284" (UID: "21a1c096-2479-4d10-864a-8b202b08a284"). InnerVolumeSpecName "kube-api-access-wlv79". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jun 03 12:30:24 addons-699562 kubelet[1268]: I0603 12:30:24.396390 1268 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wlv79\" (UniqueName: \"kubernetes.io/projected/21a1c096-2479-4d10-864a-8b202b08a284-kube-api-access-wlv79\") on node \"addons-699562\" DevicePath \"\""
Jun 03 12:30:26 addons-699562 kubelet[1268]: I0603 12:30:26.040372 1268 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-86c47465fc-79c22" podStartSLOduration=2.1337134349999998 podStartE2EDuration="4.040342322s" podCreationTimestamp="2024-06-03 12:30:22 +0000 UTC" firstStartedPulling="2024-06-03 12:30:23.061899927 +0000 UTC m=+314.684732158" lastFinishedPulling="2024-06-03 12:30:24.968528811 +0000 UTC m=+316.591361045" observedRunningTime="2024-06-03 12:30:26.039989496 +0000 UTC m=+317.662821745" watchObservedRunningTime="2024-06-03 12:30:26.040342322 +0000 UTC m=+317.663174573"
Jun 03 12:30:26 addons-699562 kubelet[1268]: I0603 12:30:26.549105 1268 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21a1c096-2479-4d10-864a-8b202b08a284" path="/var/lib/kubelet/pods/21a1c096-2479-4d10-864a-8b202b08a284/volumes"
Jun 03 12:30:26 addons-699562 kubelet[1268]: I0603 12:30:26.549498 1268 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f21b83-9dbc-4bc9-b23d-5c8c1aa04d70" path="/var/lib/kubelet/pods/40f21b83-9dbc-4bc9-b23d-5c8c1aa04d70/volumes"
Jun 03 12:30:26 addons-699562 kubelet[1268]: I0603 12:30:26.549950 1268 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="676ead8d-a891-4dac-8cc5-992c426fcdc9" path="/var/lib/kubelet/pods/676ead8d-a891-4dac-8cc5-992c426fcdc9/volumes"
Jun 03 12:30:28 addons-699562 kubelet[1268]: I0603 12:30:28.933838 1268 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/748f7279-00fd-4d10-aa57-2f4c60258fe2-webhook-cert\") pod \"748f7279-00fd-4d10-aa57-2f4c60258fe2\" (UID: \"748f7279-00fd-4d10-aa57-2f4c60258fe2\") "
Jun 03 12:30:28 addons-699562 kubelet[1268]: I0603 12:30:28.933879 1268 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh86b\" (UniqueName: \"kubernetes.io/projected/748f7279-00fd-4d10-aa57-2f4c60258fe2-kube-api-access-vh86b\") pod \"748f7279-00fd-4d10-aa57-2f4c60258fe2\" (UID: \"748f7279-00fd-4d10-aa57-2f4c60258fe2\") "
Jun 03 12:30:28 addons-699562 kubelet[1268]: I0603 12:30:28.936167 1268 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748f7279-00fd-4d10-aa57-2f4c60258fe2-kube-api-access-vh86b" (OuterVolumeSpecName: "kube-api-access-vh86b") pod "748f7279-00fd-4d10-aa57-2f4c60258fe2" (UID: "748f7279-00fd-4d10-aa57-2f4c60258fe2"). InnerVolumeSpecName "kube-api-access-vh86b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jun 03 12:30:28 addons-699562 kubelet[1268]: I0603 12:30:28.937567 1268 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748f7279-00fd-4d10-aa57-2f4c60258fe2-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "748f7279-00fd-4d10-aa57-2f4c60258fe2" (UID: "748f7279-00fd-4d10-aa57-2f4c60258fe2"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 03 12:30:29 addons-699562 kubelet[1268]: I0603 12:30:29.034524 1268 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/748f7279-00fd-4d10-aa57-2f4c60258fe2-webhook-cert\") on node \"addons-699562\" DevicePath \"\""
Jun 03 12:30:29 addons-699562 kubelet[1268]: I0603 12:30:29.034557 1268 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vh86b\" (UniqueName: \"kubernetes.io/projected/748f7279-00fd-4d10-aa57-2f4c60258fe2-kube-api-access-vh86b\") on node \"addons-699562\" DevicePath \"\""
Jun 03 12:30:29 addons-699562 kubelet[1268]: I0603 12:30:29.045918 1268 scope.go:117] "RemoveContainer" containerID="77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c"
Jun 03 12:30:29 addons-699562 kubelet[1268]: I0603 12:30:29.071057 1268 scope.go:117] "RemoveContainer" containerID="77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c"
Jun 03 12:30:29 addons-699562 kubelet[1268]: E0603 12:30:29.071800 1268 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c\": container with ID starting with 77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c not found: ID does not exist" containerID="77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c"
Jun 03 12:30:29 addons-699562 kubelet[1268]: I0603 12:30:29.071849 1268 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c"} err="failed to get container status \"77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c\": rpc error: code = NotFound desc = could not find container \"77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c\": container with ID starting with 77d76a053b1fc4548fe88ede699aaf238870e27444a32c29242f5f6d0b76f40c not found: ID does not exist"
Jun 03 12:30:30 addons-699562 kubelet[1268]: I0603 12:30:30.556191 1268 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748f7279-00fd-4d10-aa57-2f4c60258fe2" path="/var/lib/kubelet/pods/748f7279-00fd-4d10-aa57-2f4c60258fe2/volumes"
==> storage-provisioner [17a9104d810266c5a8079eeaf8d0c23a2e4538617523b6b90bff538c0454bd06] <==
I0603 12:25:34.289349 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0603 12:25:34.302711 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0603 12:25:34.302770 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0603 12:25:34.321137 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0603 12:25:34.321265 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-699562_54717765-0edd-48a4-aaa9-cc3e6be606f3!
I0603 12:25:34.323378 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b63e30e-da74-48d8-b9d7-4d6f0eeb01ad", APIVersion:"v1", ResourceVersion:"827", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-699562_54717765-0edd-48a4-aaa9-cc3e6be606f3 became leader
I0603 12:25:34.422282 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-699562_54717765-0edd-48a4-aaa9-cc3e6be606f3!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-699562 -n addons-699562
helpers_test.go:261: (dbg) Run: kubectl --context addons-699562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.09s)