=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run: kubectl --context addons-618388 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run: kubectl --context addons-618388 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run: kubectl --context addons-618388 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [004073d4-980e-4fd9-ad94-dc4598f84218] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [004073d4-980e-4fd9-ad94-dc4598f84218] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004517061s
I1216 19:38:00.184877 14254 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run: out/minikube-linux-amd64 -p addons-618388 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-618388 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.029882465s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
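Note: curl's exit status 28 means the transfer timed out, so the ssh step above kept waiting on http://127.0.0.1/ without getting an answer from the ingress-nginx controller before the test gave up. Below is a minimal, self-contained sketch of reproducing that probe outside the test harness (the binary path, profile name and Host header come from the log above; the retry budget and success check are assumptions, and this is not the test's own code):
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const (
		minikubeBin = "out/minikube-linux-amd64" // from the log above
		profile     = "addons-618388"            // from the log above
	)
	deadline := time.Now().Add(2 * time.Minute) // assumed retry budget
	for time.Now().Before(deadline) {
		// Same probe as the failing step: curl the ingress from inside the VM.
		cmd := exec.Command(minikubeBin, "-p", profile, "ssh",
			"curl -s --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		if err == nil && len(bytes.TrimSpace(out)) > 0 {
			fmt.Printf("ingress answered:\n%s\n", out)
			return
		}
		fmt.Printf("no response yet (err=%v), retrying...\n", err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the ingress to answer")
}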
addons_test.go:286: (dbg) Run: kubectl --context addons-618388 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run: out/minikube-linux-amd64 -p addons-618388 ip
addons_test.go:297: (dbg) Run: nslookup hello-john.test 192.168.39.82
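Note: the nslookup step queries 192.168.39.82 (the address returned by the minikube ip call just above) to confirm the ingress-dns addon resolves the hostname from the manifest applied at addons_test.go:286. A rough Go equivalent that points a resolver at that node directly; the hostname and IP are taken from the log, everything else is illustrative rather than the test's implementation:
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Query the cluster node directly instead of the system resolver,
	// mirroring `nslookup hello-john.test 192.168.39.82` from the log.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.39.82:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("hello-john.test resolves to:", addrs)
}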
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-618388 -n addons-618388
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-618388 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 logs -n 25: (1.534637276s)
helpers_test.go:252: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| delete | -p download-only-654038 | download-only-654038 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:34 UTC |
| delete | -p download-only-646102 | download-only-646102 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:34 UTC |
| delete | -p download-only-654038 | download-only-654038 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:34 UTC |
| start | --download-only -p | binary-mirror-010223 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | |
| | binary-mirror-010223 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:42673 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | -p binary-mirror-010223 | binary-mirror-010223 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:34 UTC |
| addons | enable dashboard -p | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | |
| | addons-618388 | | | | | |
| addons | disable dashboard -p | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | |
| | addons-618388 | | | | | |
| start | -p addons-618388 --wait=true | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:34 UTC | 16 Dec 24 19:37 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --addons=amd-gpu-device-plugin | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| addons | addons-618388 addons disable | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-618388 addons disable | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
| | gcp-auth --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | enable headlamp | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
| | -p addons-618388 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-618388 addons disable | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| addons | addons-618388 addons | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
| | disable nvidia-device-plugin | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-618388 addons disable | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| ip | addons-618388 ip | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
| addons | addons-618388 addons | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
| | disable metrics-server | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-618388 addons disable | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-618388 addons | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:38 UTC |
| | disable inspektor-gadget | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-618388 addons | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:37 UTC | 16 Dec 24 19:37 UTC |
| | disable cloud-spanner | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-618388 ssh curl -s | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:38 UTC | |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| ssh | addons-618388 ssh cat | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:38 UTC | 16 Dec 24 19:38 UTC |
| | /opt/local-path-provisioner/pvc-4e008b7b-de06-41f9-8097-3d4fc784c52a_default_test-pvc/file1 | | | | | |
| addons | addons-618388 addons disable | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:38 UTC | 16 Dec 24 19:38 UTC |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-618388 addons | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:38 UTC | 16 Dec 24 19:38 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-618388 addons | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:38 UTC | 16 Dec 24 19:38 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ip | addons-618388 ip | addons-618388 | jenkins | v1.34.0 | 16 Dec 24 19:40 UTC | 16 Dec 24 19:40 UTC |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/12/16 19:34:56
Running on machine: ubuntu-20-agent-15
Binary: Built with gc go1.23.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1216 19:34:56.870008 14891 out.go:345] Setting OutFile to fd 1 ...
I1216 19:34:56.870269 14891 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:34:56.870284 14891 out.go:358] Setting ErrFile to fd 2...
I1216 19:34:56.870292 14891 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 19:34:56.870503 14891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20091-7083/.minikube/bin
I1216 19:34:56.871277 14891 out.go:352] Setting JSON to false
I1216 19:34:56.872245 14891 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1042,"bootTime":1734376655,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1216 19:34:56.872367 14891 start.go:139] virtualization: kvm guest
I1216 19:34:56.874566 14891 out.go:177] * [addons-618388] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
I1216 19:34:56.875833 14891 notify.go:220] Checking for updates...
I1216 19:34:56.875844 14891 out.go:177] - MINIKUBE_LOCATION=20091
I1216 19:34:56.877154 14891 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1216 19:34:56.878487 14891 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20091-7083/kubeconfig
I1216 19:34:56.879708 14891 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20091-7083/.minikube
I1216 19:34:56.881065 14891 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1216 19:34:56.882340 14891 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1216 19:34:56.883827 14891 driver.go:394] Setting default libvirt URI to qemu:///system
I1216 19:34:56.916064 14891 out.go:177] * Using the kvm2 driver based on user configuration
I1216 19:34:56.917375 14891 start.go:297] selected driver: kvm2
I1216 19:34:56.917401 14891 start.go:901] validating driver "kvm2" against <nil>
I1216 19:34:56.917418 14891 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1216 19:34:56.918454 14891 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 19:34:56.918658 14891 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20091-7083/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1216 19:34:56.933253 14891 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
I1216 19:34:56.933299 14891 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I1216 19:34:56.933529 14891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1216 19:34:56.933557 14891 cni.go:84] Creating CNI manager for ""
I1216 19:34:56.933595 14891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1216 19:34:56.933602 14891 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1216 19:34:56.933645 14891 start.go:340] cluster config:
{Name:addons-618388 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-618388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1216 19:34:56.933723 14891 iso.go:125] acquiring lock: {Name:mk60ed2ba7ed00047edacd09f4f6bf84214f0831 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1216 19:34:56.935487 14891 out.go:177] * Starting "addons-618388" primary control-plane node in "addons-618388" cluster
I1216 19:34:56.936818 14891 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
I1216 19:34:56.936857 14891 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
I1216 19:34:56.936865 14891 cache.go:56] Caching tarball of preloaded images
I1216 19:34:56.936936 14891 preload.go:172] Found /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1216 19:34:56.936947 14891 cache.go:59] Finished verifying existence of preloaded tar for v1.32.0 on crio
I1216 19:34:56.937260 14891 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/config.json ...
I1216 19:34:56.937284 14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/config.json: {Name:mk1d5f6df4bb14319daf632ba585b1ab53139758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:34:56.937413 14891 start.go:360] acquireMachinesLock for addons-618388: {Name:mk014ce1133f8d018fee1f78c9c31a354da6dd77 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1216 19:34:56.937457 14891 start.go:364] duration metric: took 30.667µs to acquireMachinesLock for "addons-618388"
I1216 19:34:56.937473 14891 start.go:93] Provisioning new machine with config: &{Name:addons-618388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-618388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
I1216 19:34:56.937527 14891 start.go:125] createHost starting for "" (driver="kvm2")
I1216 19:34:56.939314 14891 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
I1216 19:34:56.939518 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:34:56.939573 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:34:56.953735 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33843
I1216 19:34:56.954175 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:34:56.954796 14891 main.go:141] libmachine: Using API Version 1
I1216 19:34:56.954811 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:34:56.955129 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:34:56.955313 14891 main.go:141] libmachine: (addons-618388) Calling .GetMachineName
I1216 19:34:56.955485 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:34:56.955641 14891 start.go:159] libmachine.API.Create for "addons-618388" (driver="kvm2")
I1216 19:34:56.955679 14891 client.go:168] LocalClient.Create starting
I1216 19:34:56.955722 14891 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem
I1216 19:34:57.176207 14891 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem
I1216 19:34:57.439058 14891 main.go:141] libmachine: Running pre-create checks...
I1216 19:34:57.439100 14891 main.go:141] libmachine: (addons-618388) Calling .PreCreateCheck
I1216 19:34:57.439677 14891 main.go:141] libmachine: (addons-618388) Calling .GetConfigRaw
I1216 19:34:57.440093 14891 main.go:141] libmachine: Creating machine...
I1216 19:34:57.440109 14891 main.go:141] libmachine: (addons-618388) Calling .Create
I1216 19:34:57.440267 14891 main.go:141] libmachine: (addons-618388) creating KVM machine...
I1216 19:34:57.440283 14891 main.go:141] libmachine: (addons-618388) creating network...
I1216 19:34:57.441565 14891 main.go:141] libmachine: (addons-618388) DBG | found existing default KVM network
I1216 19:34:57.442237 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:57.442096 14914 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001231f0}
I1216 19:34:57.442278 14891 main.go:141] libmachine: (addons-618388) DBG | created network xml:
I1216 19:34:57.442301 14891 main.go:141] libmachine: (addons-618388) DBG | <network>
I1216 19:34:57.442311 14891 main.go:141] libmachine: (addons-618388) DBG | <name>mk-addons-618388</name>
I1216 19:34:57.442318 14891 main.go:141] libmachine: (addons-618388) DBG | <dns enable='no'/>
I1216 19:34:57.442323 14891 main.go:141] libmachine: (addons-618388) DBG |
I1216 19:34:57.442333 14891 main.go:141] libmachine: (addons-618388) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I1216 19:34:57.442346 14891 main.go:141] libmachine: (addons-618388) DBG | <dhcp>
I1216 19:34:57.442356 14891 main.go:141] libmachine: (addons-618388) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I1216 19:34:57.442369 14891 main.go:141] libmachine: (addons-618388) DBG | </dhcp>
I1216 19:34:57.442375 14891 main.go:141] libmachine: (addons-618388) DBG | </ip>
I1216 19:34:57.442383 14891 main.go:141] libmachine: (addons-618388) DBG |
I1216 19:34:57.442395 14891 main.go:141] libmachine: (addons-618388) DBG | </network>
I1216 19:34:57.442430 14891 main.go:141] libmachine: (addons-618388) DBG |
I1216 19:34:57.447723 14891 main.go:141] libmachine: (addons-618388) DBG | trying to create private KVM network mk-addons-618388 192.168.39.0/24...
I1216 19:34:57.511927 14891 main.go:141] libmachine: (addons-618388) DBG | private KVM network mk-addons-618388 192.168.39.0/24 created
I1216 19:34:57.511968 14891 main.go:141] libmachine: (addons-618388) setting up store path in /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388 ...
I1216 19:34:57.511989 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:57.511898 14914 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20091-7083/.minikube
I1216 19:34:57.512013 14891 main.go:141] libmachine: (addons-618388) building disk image from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso
I1216 19:34:57.512032 14891 main.go:141] libmachine: (addons-618388) Downloading /home/jenkins/minikube-integration/20091-7083/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20091-7083/.minikube/cache/iso/amd64/minikube-v1.34.0-1734029574-20090-amd64.iso...
I1216 19:34:57.773312 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:57.773193 14914 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa...
I1216 19:34:58.178894 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:58.178744 14914 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/addons-618388.rawdisk...
I1216 19:34:58.178916 14891 main.go:141] libmachine: (addons-618388) DBG | Writing magic tar header
I1216 19:34:58.178925 14891 main.go:141] libmachine: (addons-618388) DBG | Writing SSH key tar header
I1216 19:34:58.178933 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:58.178855 14914 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388 ...
I1216 19:34:58.178943 14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388
I1216 19:34:58.178952 14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube/machines
I1216 19:34:58.178960 14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20091-7083/.minikube
I1216 19:34:58.178966 14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20091-7083
I1216 19:34:58.178975 14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home/jenkins/minikube-integration
I1216 19:34:58.178992 14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home/jenkins
I1216 19:34:58.178997 14891 main.go:141] libmachine: (addons-618388) DBG | checking permissions on dir: /home
I1216 19:34:58.179008 14891 main.go:141] libmachine: (addons-618388) DBG | skipping /home - not owner
I1216 19:34:58.179033 14891 main.go:141] libmachine: (addons-618388) setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388 (perms=drwx------)
I1216 19:34:58.179068 14891 main.go:141] libmachine: (addons-618388) setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube/machines (perms=drwxr-xr-x)
I1216 19:34:58.179083 14891 main.go:141] libmachine: (addons-618388) setting executable bit set on /home/jenkins/minikube-integration/20091-7083/.minikube (perms=drwxr-xr-x)
I1216 19:34:58.179091 14891 main.go:141] libmachine: (addons-618388) setting executable bit set on /home/jenkins/minikube-integration/20091-7083 (perms=drwxrwxr-x)
I1216 19:34:58.179097 14891 main.go:141] libmachine: (addons-618388) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1216 19:34:58.179103 14891 main.go:141] libmachine: (addons-618388) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1216 19:34:58.179107 14891 main.go:141] libmachine: (addons-618388) creating domain...
I1216 19:34:58.180072 14891 main.go:141] libmachine: (addons-618388) define libvirt domain using xml:
I1216 19:34:58.180094 14891 main.go:141] libmachine: (addons-618388) <domain type='kvm'>
I1216 19:34:58.180104 14891 main.go:141] libmachine: (addons-618388) <name>addons-618388</name>
I1216 19:34:58.180110 14891 main.go:141] libmachine: (addons-618388) <memory unit='MiB'>4000</memory>
I1216 19:34:58.180127 14891 main.go:141] libmachine: (addons-618388) <vcpu>2</vcpu>
I1216 19:34:58.180135 14891 main.go:141] libmachine: (addons-618388) <features>
I1216 19:34:58.180143 14891 main.go:141] libmachine: (addons-618388) <acpi/>
I1216 19:34:58.180150 14891 main.go:141] libmachine: (addons-618388) <apic/>
I1216 19:34:58.180159 14891 main.go:141] libmachine: (addons-618388) <pae/>
I1216 19:34:58.180166 14891 main.go:141] libmachine: (addons-618388)
I1216 19:34:58.180174 14891 main.go:141] libmachine: (addons-618388) </features>
I1216 19:34:58.180186 14891 main.go:141] libmachine: (addons-618388) <cpu mode='host-passthrough'>
I1216 19:34:58.180198 14891 main.go:141] libmachine: (addons-618388)
I1216 19:34:58.180213 14891 main.go:141] libmachine: (addons-618388) </cpu>
I1216 19:34:58.180222 14891 main.go:141] libmachine: (addons-618388) <os>
I1216 19:34:58.180230 14891 main.go:141] libmachine: (addons-618388) <type>hvm</type>
I1216 19:34:58.180239 14891 main.go:141] libmachine: (addons-618388) <boot dev='cdrom'/>
I1216 19:34:58.180250 14891 main.go:141] libmachine: (addons-618388) <boot dev='hd'/>
I1216 19:34:58.180258 14891 main.go:141] libmachine: (addons-618388) <bootmenu enable='no'/>
I1216 19:34:58.180270 14891 main.go:141] libmachine: (addons-618388) </os>
I1216 19:34:58.180283 14891 main.go:141] libmachine: (addons-618388) <devices>
I1216 19:34:58.180297 14891 main.go:141] libmachine: (addons-618388) <disk type='file' device='cdrom'>
I1216 19:34:58.180323 14891 main.go:141] libmachine: (addons-618388) <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/boot2docker.iso'/>
I1216 19:34:58.180341 14891 main.go:141] libmachine: (addons-618388) <target dev='hdc' bus='scsi'/>
I1216 19:34:58.180349 14891 main.go:141] libmachine: (addons-618388) <readonly/>
I1216 19:34:58.180358 14891 main.go:141] libmachine: (addons-618388) </disk>
I1216 19:34:58.180404 14891 main.go:141] libmachine: (addons-618388) <disk type='file' device='disk'>
I1216 19:34:58.180431 14891 main.go:141] libmachine: (addons-618388) <driver name='qemu' type='raw' cache='default' io='threads' />
I1216 19:34:58.180443 14891 main.go:141] libmachine: (addons-618388) <source file='/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/addons-618388.rawdisk'/>
I1216 19:34:58.180451 14891 main.go:141] libmachine: (addons-618388) <target dev='hda' bus='virtio'/>
I1216 19:34:58.180457 14891 main.go:141] libmachine: (addons-618388) </disk>
I1216 19:34:58.180464 14891 main.go:141] libmachine: (addons-618388) <interface type='network'>
I1216 19:34:58.180470 14891 main.go:141] libmachine: (addons-618388) <source network='mk-addons-618388'/>
I1216 19:34:58.180477 14891 main.go:141] libmachine: (addons-618388) <model type='virtio'/>
I1216 19:34:58.180481 14891 main.go:141] libmachine: (addons-618388) </interface>
I1216 19:34:58.180488 14891 main.go:141] libmachine: (addons-618388) <interface type='network'>
I1216 19:34:58.180493 14891 main.go:141] libmachine: (addons-618388) <source network='default'/>
I1216 19:34:58.180500 14891 main.go:141] libmachine: (addons-618388) <model type='virtio'/>
I1216 19:34:58.180534 14891 main.go:141] libmachine: (addons-618388) </interface>
I1216 19:34:58.180551 14891 main.go:141] libmachine: (addons-618388) <serial type='pty'>
I1216 19:34:58.180558 14891 main.go:141] libmachine: (addons-618388) <target port='0'/>
I1216 19:34:58.180564 14891 main.go:141] libmachine: (addons-618388) </serial>
I1216 19:34:58.180576 14891 main.go:141] libmachine: (addons-618388) <console type='pty'>
I1216 19:34:58.180584 14891 main.go:141] libmachine: (addons-618388) <target type='serial' port='0'/>
I1216 19:34:58.180589 14891 main.go:141] libmachine: (addons-618388) </console>
I1216 19:34:58.180594 14891 main.go:141] libmachine: (addons-618388) <rng model='virtio'>
I1216 19:34:58.180600 14891 main.go:141] libmachine: (addons-618388) <backend model='random'>/dev/random</backend>
I1216 19:34:58.180606 14891 main.go:141] libmachine: (addons-618388) </rng>
I1216 19:34:58.180611 14891 main.go:141] libmachine: (addons-618388)
I1216 19:34:58.180621 14891 main.go:141] libmachine: (addons-618388)
I1216 19:34:58.180632 14891 main.go:141] libmachine: (addons-618388) </devices>
I1216 19:34:58.180639 14891 main.go:141] libmachine: (addons-618388) </domain>
I1216 19:34:58.180647 14891 main.go:141] libmachine: (addons-618388)
I1216 19:34:58.186519 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:38:4c:89 in network default
I1216 19:34:58.187149 14891 main.go:141] libmachine: (addons-618388) starting domain...
I1216 19:34:58.187163 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:34:58.187168 14891 main.go:141] libmachine: (addons-618388) ensuring networks are active...
I1216 19:34:58.187829 14891 main.go:141] libmachine: (addons-618388) Ensuring network default is active
I1216 19:34:58.188125 14891 main.go:141] libmachine: (addons-618388) Ensuring network mk-addons-618388 is active
I1216 19:34:58.188618 14891 main.go:141] libmachine: (addons-618388) getting domain XML...
I1216 19:34:58.189252 14891 main.go:141] libmachine: (addons-618388) creating domain...
I1216 19:34:59.611349 14891 main.go:141] libmachine: (addons-618388) waiting for IP...
I1216 19:34:59.612100 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:34:59.612509 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:34:59.612563 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:59.612505 14914 retry.go:31] will retry after 260.418297ms: waiting for domain to come up
I1216 19:34:59.875034 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:34:59.875546 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:34:59.875577 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:34:59.875496 14914 retry.go:31] will retry after 293.540026ms: waiting for domain to come up
I1216 19:35:00.171153 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:00.171578 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:35:00.171622 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:00.171567 14914 retry.go:31] will retry after 302.02571ms: waiting for domain to come up
I1216 19:35:00.474954 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:00.475449 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:35:00.475482 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:00.475429 14914 retry.go:31] will retry after 385.529875ms: waiting for domain to come up
I1216 19:35:00.863267 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:00.863723 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:35:00.863767 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:00.863717 14914 retry.go:31] will retry after 640.272037ms: waiting for domain to come up
I1216 19:35:01.505404 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:01.505803 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:35:01.505837 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:01.505774 14914 retry.go:31] will retry after 721.536466ms: waiting for domain to come up
I1216 19:35:02.229456 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:02.230068 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:35:02.230098 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:02.229987 14914 retry.go:31] will retry after 1.102160447s: waiting for domain to come up
I1216 19:35:03.334077 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:03.334523 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:35:03.334550 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:03.334486 14914 retry.go:31] will retry after 1.363083549s: waiting for domain to come up
I1216 19:35:04.699456 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:04.699858 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:35:04.699880 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:04.699834 14914 retry.go:31] will retry after 1.800012159s: waiting for domain to come up
I1216 19:35:06.501712 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:06.502102 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:35:06.502129 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:06.502082 14914 retry.go:31] will retry after 2.251346298s: waiting for domain to come up
I1216 19:35:08.755787 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:08.756226 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:35:08.756258 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:08.756200 14914 retry.go:31] will retry after 1.964356479s: waiting for domain to come up
I1216 19:35:10.722091 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:10.722561 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:35:10.722589 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:10.722540 14914 retry.go:31] will retry after 2.999608213s: waiting for domain to come up
I1216 19:35:13.724350 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:13.724852 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:35:13.724876 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:13.724797 14914 retry.go:31] will retry after 2.776458394s: waiting for domain to come up
I1216 19:35:16.504723 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:16.505155 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find current IP address of domain addons-618388 in network mk-addons-618388
I1216 19:35:16.505173 14891 main.go:141] libmachine: (addons-618388) DBG | I1216 19:35:16.505131 14914 retry.go:31] will retry after 3.91215948s: waiting for domain to come up
I1216 19:35:20.421905 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:20.422432 14891 main.go:141] libmachine: (addons-618388) found domain IP: 192.168.39.82
I1216 19:35:20.422465 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has current primary IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:20.422473 14891 main.go:141] libmachine: (addons-618388) reserving static IP address...
I1216 19:35:20.422882 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find host DHCP lease matching {name: "addons-618388", mac: "52:54:00:3b:31:2c", ip: "192.168.39.82"} in network mk-addons-618388
I1216 19:35:20.500737 14891 main.go:141] libmachine: (addons-618388) DBG | Getting to WaitForSSH function...
I1216 19:35:20.500774 14891 main.go:141] libmachine: (addons-618388) reserved static IP address 192.168.39.82 for domain addons-618388
I1216 19:35:20.500786 14891 main.go:141] libmachine: (addons-618388) waiting for SSH...
I1216 19:35:20.503156 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:20.503484 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388
I1216 19:35:20.503516 14891 main.go:141] libmachine: (addons-618388) DBG | unable to find defined IP address of network mk-addons-618388 interface with MAC address 52:54:00:3b:31:2c
I1216 19:35:20.503742 14891 main.go:141] libmachine: (addons-618388) DBG | Using SSH client type: external
I1216 19:35:20.503763 14891 main.go:141] libmachine: (addons-618388) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa (-rw-------)
I1216 19:35:20.503913 14891 main.go:141] libmachine: (addons-618388) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa -p 22] /usr/bin/ssh <nil>}
I1216 19:35:20.503935 14891 main.go:141] libmachine: (addons-618388) DBG | About to run SSH command:
I1216 19:35:20.503947 14891 main.go:141] libmachine: (addons-618388) DBG | exit 0
I1216 19:35:20.515809 14891 main.go:141] libmachine: (addons-618388) DBG | SSH cmd err, output: exit status 255:
I1216 19:35:20.515834 14891 main.go:141] libmachine: (addons-618388) DBG | Error getting ssh command 'exit 0' : ssh command error:
I1216 19:35:20.515841 14891 main.go:141] libmachine: (addons-618388) DBG | command : exit 0
I1216 19:35:20.515851 14891 main.go:141] libmachine: (addons-618388) DBG | err : exit status 255
I1216 19:35:20.515859 14891 main.go:141] libmachine: (addons-618388) DBG | output :
I1216 19:35:23.517550 14891 main.go:141] libmachine: (addons-618388) DBG | Getting to WaitForSSH function...
I1216 19:35:23.520087 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:23.520478 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:23.520507 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:23.520686 14891 main.go:141] libmachine: (addons-618388) DBG | Using SSH client type: external
I1216 19:35:23.520710 14891 main.go:141] libmachine: (addons-618388) DBG | Using SSH private key: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa (-rw-------)
I1216 19:35:23.520738 14891 main.go:141] libmachine: (addons-618388) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa -p 22] /usr/bin/ssh <nil>}
I1216 19:35:23.520748 14891 main.go:141] libmachine: (addons-618388) DBG | About to run SSH command:
I1216 19:35:23.520763 14891 main.go:141] libmachine: (addons-618388) DBG | exit 0
I1216 19:35:23.643289 14891 main.go:141] libmachine: (addons-618388) DBG | SSH cmd err, output: <nil>:
I1216 19:35:23.643527 14891 main.go:141] libmachine: (addons-618388) KVM machine creation complete
I1216 19:35:23.644026 14891 main.go:141] libmachine: (addons-618388) Calling .GetConfigRaw
I1216 19:35:23.644581 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:23.644808 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:23.644951 14891 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I1216 19:35:23.644965 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:23.646368 14891 main.go:141] libmachine: Detecting operating system of created instance...
I1216 19:35:23.646382 14891 main.go:141] libmachine: Waiting for SSH to be available...
I1216 19:35:23.646387 14891 main.go:141] libmachine: Getting to WaitForSSH function...
I1216 19:35:23.646392 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:23.648635 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:23.648998 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:23.649015 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:23.649156 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:23.649294 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:23.649430 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:23.649528 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:23.649748 14891 main.go:141] libmachine: Using SSH client type: native
I1216 19:35:23.649928 14891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil> [] 0s} 192.168.39.82 22 <nil> <nil>}
I1216 19:35:23.649938 14891 main.go:141] libmachine: About to run SSH command:
exit 0
I1216 19:35:23.754933 14891 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1216 19:35:23.754963 14891 main.go:141] libmachine: Detecting the provisioner...
I1216 19:35:23.754973 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:23.758121 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:23.758463 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:23.758494 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:23.758680 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:23.758975 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:23.759196 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:23.759407 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:23.759607 14891 main.go:141] libmachine: Using SSH client type: native
I1216 19:35:23.759788 14891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil> [] 0s} 192.168.39.82 22 <nil> <nil>}
I1216 19:35:23.759801 14891 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I1216 19:35:23.860602 14891 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I1216 19:35:23.860652 14891 main.go:141] libmachine: found compatible host: buildroot
I1216 19:35:23.860661 14891 main.go:141] libmachine: Provisioning with buildroot...
I1216 19:35:23.860669 14891 main.go:141] libmachine: (addons-618388) Calling .GetMachineName
I1216 19:35:23.860903 14891 buildroot.go:166] provisioning hostname "addons-618388"
I1216 19:35:23.860928 14891 main.go:141] libmachine: (addons-618388) Calling .GetMachineName
I1216 19:35:23.861118 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:23.863908 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:23.864296 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:23.864320 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:23.864457 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:23.864647 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:23.864834 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:23.864976 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:23.865186 14891 main.go:141] libmachine: Using SSH client type: native
I1216 19:35:23.865399 14891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil> [] 0s} 192.168.39.82 22 <nil> <nil>}
I1216 19:35:23.865419 14891 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-618388 && echo "addons-618388" | sudo tee /etc/hostname
I1216 19:35:23.984619 14891 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-618388
I1216 19:35:23.984653 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:23.987150 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:23.987561 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:23.987593 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:23.987903 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:23.988093 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:23.988215 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:23.988342 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:23.988539 14891 main.go:141] libmachine: Using SSH client type: native
I1216 19:35:23.988750 14891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil> [] 0s} 192.168.39.82 22 <nil> <nil>}
I1216 19:35:23.988773 14891 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-618388' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-618388/g' /etc/hosts;
else
echo '127.0.1.1 addons-618388' | sudo tee -a /etc/hosts;
fi
fi
I1216 19:35:24.104303 14891 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1216 19:35:24.104333 14891 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20091-7083/.minikube CaCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20091-7083/.minikube}
I1216 19:35:24.104373 14891 buildroot.go:174] setting up certificates
I1216 19:35:24.104384 14891 provision.go:84] configureAuth start
I1216 19:35:24.104394 14891 main.go:141] libmachine: (addons-618388) Calling .GetMachineName
I1216 19:35:24.104666 14891 main.go:141] libmachine: (addons-618388) Calling .GetIP
I1216 19:35:24.107137 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:24.107483 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:24.107510 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:24.107662 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:24.109717 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:24.110022 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:24.110052 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:24.110134 14891 provision.go:143] copyHostCerts
I1216 19:35:24.110210 14891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/ca.pem (1082 bytes)
I1216 19:35:24.110377 14891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/cert.pem (1123 bytes)
I1216 19:35:24.110459 14891 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20091-7083/.minikube/key.pem (1679 bytes)
I1216 19:35:24.110524 14891 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem org=jenkins.addons-618388 san=[127.0.0.1 192.168.39.82 addons-618388 localhost minikube]
I1216 19:35:24.247178 14891 provision.go:177] copyRemoteCerts
I1216 19:35:24.247231 14891 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1216 19:35:24.247265 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:24.249816 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:24.250144 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:24.250176 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:24.250346 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:24.250554 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:24.250695 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:24.250830 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:24.330115 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1216 19:35:24.356239 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1216 19:35:24.412775 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1216 19:35:24.442720 14891 provision.go:87] duration metric: took 338.323541ms to configureAuth
I1216 19:35:24.442750 14891 buildroot.go:189] setting minikube options for container-runtime
I1216 19:35:24.442932 14891 config.go:182] Loaded profile config "addons-618388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:35:24.443023 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:24.445502 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:24.445947 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:24.445974 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:24.446221 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:24.446397 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:24.446624 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:24.446773 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:24.446965 14891 main.go:141] libmachine: Using SSH client type: native
I1216 19:35:24.447142 14891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil> [] 0s} 192.168.39.82 22 <nil> <nil>}
I1216 19:35:24.447158 14891 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1216 19:35:24.949923 14891 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1216 19:35:24.949948 14891 main.go:141] libmachine: Checking connection to Docker...
I1216 19:35:24.949957 14891 main.go:141] libmachine: (addons-618388) Calling .GetURL
I1216 19:35:24.951452 14891 main.go:141] libmachine: (addons-618388) DBG | using libvirt version 6000000
I1216 19:35:24.953565 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:24.953916 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:24.953943 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:24.954208 14891 main.go:141] libmachine: Docker is up and running!
I1216 19:35:24.954232 14891 main.go:141] libmachine: Reticulating splines...
I1216 19:35:24.954240 14891 client.go:171] duration metric: took 27.998550144s to LocalClient.Create
I1216 19:35:24.954259 14891 start.go:167] duration metric: took 27.998621198s to libmachine.API.Create "addons-618388"
I1216 19:35:24.954271 14891 start.go:293] postStartSetup for "addons-618388" (driver="kvm2")
I1216 19:35:24.954284 14891 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1216 19:35:24.954314 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:24.954549 14891 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1216 19:35:24.954569 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:24.956866 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:24.957175 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:24.957202 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:24.957329 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:24.957495 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:24.957640 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:24.957791 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:25.038209 14891 ssh_runner.go:195] Run: cat /etc/os-release
I1216 19:35:25.042942 14891 info.go:137] Remote host: Buildroot 2023.02.9
I1216 19:35:25.042982 14891 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/addons for local assets ...
I1216 19:35:25.043061 14891 filesync.go:126] Scanning /home/jenkins/minikube-integration/20091-7083/.minikube/files for local assets ...
I1216 19:35:25.043097 14891 start.go:296] duration metric: took 88.819464ms for postStartSetup
I1216 19:35:25.043133 14891 main.go:141] libmachine: (addons-618388) Calling .GetConfigRaw
I1216 19:35:25.043802 14891 main.go:141] libmachine: (addons-618388) Calling .GetIP
I1216 19:35:25.046709 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:25.047131 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:25.047162 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:25.047428 14891 profile.go:143] Saving config to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/config.json ...
I1216 19:35:25.047665 14891 start.go:128] duration metric: took 28.110128367s to createHost
I1216 19:35:25.047694 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:25.050107 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:25.050549 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:25.050582 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:25.050754 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:25.050953 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:25.051115 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:25.051285 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:25.051457 14891 main.go:141] libmachine: Using SSH client type: native
I1216 19:35:25.051611 14891 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x867100] 0x869de0 <nil> [] 0s} 192.168.39.82 22 <nil> <nil>}
I1216 19:35:25.051628 14891 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1216 19:35:25.152694 14891 main.go:141] libmachine: SSH cmd err, output: <nil>: 1734377725.128871797
I1216 19:35:25.152722 14891 fix.go:216] guest clock: 1734377725.128871797
I1216 19:35:25.152734 14891 fix.go:229] Guest: 2024-12-16 19:35:25.128871797 +0000 UTC Remote: 2024-12-16 19:35:25.047680692 +0000 UTC m=+28.213613803 (delta=81.191105ms)
I1216 19:35:25.152759 14891 fix.go:200] guest clock delta is within tolerance: 81.191105ms
I1216 19:35:25.152765 14891 start.go:83] releasing machines lock for "addons-618388", held for 28.215298778s
I1216 19:35:25.152790 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:25.153051 14891 main.go:141] libmachine: (addons-618388) Calling .GetIP
I1216 19:35:25.156351 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:25.156727 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:25.156756 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:25.156971 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:25.157496 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:25.157684 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:25.157797 14891 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1216 19:35:25.157855 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:25.157874 14891 ssh_runner.go:195] Run: cat /version.json
I1216 19:35:25.157889 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:25.160387 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:25.160632 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:25.160727 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:25.160770 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:25.160898 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:25.161012 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:25.161050 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:25.161079 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:25.161271 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:25.161289 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:25.161471 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:25.161468 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:25.161641 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:25.161814 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:25.264508 14891 ssh_runner.go:195] Run: systemctl --version
I1216 19:35:25.270933 14891 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1216 19:35:25.435897 14891 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1216 19:35:25.442518 14891 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1216 19:35:25.442585 14891 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1216 19:35:25.460231 14891 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1216 19:35:25.460257 14891 start.go:495] detecting cgroup driver to use...
I1216 19:35:25.460316 14891 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1216 19:35:25.477318 14891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1216 19:35:25.491221 14891 docker.go:217] disabling cri-docker service (if available) ...
I1216 19:35:25.491318 14891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1216 19:35:25.506165 14891 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1216 19:35:25.520318 14891 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1216 19:35:25.645493 14891 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1216 19:35:25.792329 14891 docker.go:233] disabling docker service ...
I1216 19:35:25.792407 14891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1216 19:35:25.807438 14891 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1216 19:35:25.821221 14891 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1216 19:35:25.954091 14891 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1216 19:35:26.074391 14891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1216 19:35:26.088947 14891 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1216 19:35:26.108956 14891 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
I1216 19:35:26.109039 14891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
I1216 19:35:26.120980 14891 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1216 19:35:26.121101 14891 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1216 19:35:26.132546 14891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1216 19:35:26.144241 14891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1216 19:35:26.156253 14891 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1216 19:35:26.168305 14891 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1216 19:35:26.179945 14891 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1216 19:35:26.198487 14891 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
I1216 19:35:26.210113 14891 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1216 19:35:26.220621 14891 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1216 19:35:26.220695 14891 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1216 19:35:26.237391 14891 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1216 19:35:26.249281 14891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1216 19:35:26.382408 14891 ssh_runner.go:195] Run: sudo systemctl restart crio
I1216 19:35:26.484383 14891 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I1216 19:35:26.484480 14891 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1216 19:35:26.490086 14891 start.go:563] Will wait 60s for crictl version
I1216 19:35:26.490177 14891 ssh_runner.go:195] Run: which crictl
I1216 19:35:26.494182 14891 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1216 19:35:26.533934 14891 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1216 19:35:26.534050 14891 ssh_runner.go:195] Run: crio --version
I1216 19:35:26.562928 14891 ssh_runner.go:195] Run: crio --version
I1216 19:35:26.594310 14891 out.go:177] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
I1216 19:35:26.595780 14891 main.go:141] libmachine: (addons-618388) Calling .GetIP
I1216 19:35:26.598405 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:26.598711 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:26.598730 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:26.598965 14891 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1216 19:35:26.603547 14891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1216 19:35:26.616735 14891 kubeadm.go:883] updating cluster {Name:addons-618388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-618388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1216 19:35:26.616834 14891 preload.go:131] Checking if preload exists for k8s version v1.32.0 and runtime crio
I1216 19:35:26.616877 14891 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 19:35:26.651197 14891 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
I1216 19:35:26.651283 14891 ssh_runner.go:195] Run: which lz4
I1216 19:35:26.655411 14891 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1216 19:35:26.659872 14891 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1216 19:35:26.659914 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
I1216 19:35:28.053331 14891 crio.go:462] duration metric: took 1.397942286s to copy over tarball
I1216 19:35:28.053420 14891 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1216 19:35:30.377644 14891 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.324187999s)
I1216 19:35:30.377674 14891 crio.go:469] duration metric: took 2.324307812s to extract the tarball
I1216 19:35:30.377684 14891 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1216 19:35:30.421523 14891 ssh_runner.go:195] Run: sudo crictl images --output json
I1216 19:35:30.475916 14891 crio.go:514] all images are preloaded for cri-o runtime.
I1216 19:35:30.475941 14891 cache_images.go:84] Images are preloaded, skipping loading
I1216 19:35:30.475949 14891 kubeadm.go:934] updating node { 192.168.39.82 8443 v1.32.0 crio true true} ...
I1216 19:35:30.476038 14891 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-618388 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.82
[Install]
config:
{KubernetesVersion:v1.32.0 ClusterName:addons-618388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1216 19:35:30.476114 14891 ssh_runner.go:195] Run: crio config
I1216 19:35:30.533078 14891 cni.go:84] Creating CNI manager for ""
I1216 19:35:30.533106 14891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1216 19:35:30.533118 14891 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1216 19:35:30.533149 14891 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.82 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-618388 NodeName:addons-618388 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1216 19:35:30.533301 14891 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.82
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-618388"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.82"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.82"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1216 19:35:30.533372 14891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
I1216 19:35:30.544648 14891 binaries.go:44] Found k8s binaries, skipping transfer
I1216 19:35:30.544717 14891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1216 19:35:30.555742 14891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
I1216 19:35:30.576665 14891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1216 19:35:30.595189 14891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
I1216 19:35:30.613398 14891 ssh_runner.go:195] Run: grep 192.168.39.82 control-plane.minikube.internal$ /etc/hosts
I1216 19:35:30.617840 14891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.82 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1216 19:35:30.631230 14891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1216 19:35:30.778750 14891 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1216 19:35:30.797497 14891 certs.go:68] Setting up /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388 for IP: 192.168.39.82
I1216 19:35:30.797522 14891 certs.go:194] generating shared ca certs ...
I1216 19:35:30.797541 14891 certs.go:226] acquiring lock for ca certs: {Name:mk7f8f83a04be3d39897a025f51d4d8228b5a509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:30.797677 14891 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key
I1216 19:35:31.087805 14891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt ...
I1216 19:35:31.087836 14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt: {Name:mk8223f4a742e4125b8daa3a7e32f17d883b5f99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:31.088009 14891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key ...
I1216 19:35:31.088019 14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key: {Name:mk35573315444553834e6f18cd2b940679ee0f07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:31.088091 14891 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key
I1216 19:35:31.149595 14891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt ...
I1216 19:35:31.149624 14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt: {Name:mk6a3f6f336ce262b90176d6c96cfa7c898ea7bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:31.149782 14891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key ...
I1216 19:35:31.149793 14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key: {Name:mkf8d43e410cad4aa5548e27f7459158da163348 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:31.149856 14891 certs.go:256] generating profile certs ...
I1216 19:35:31.149924 14891 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.key
I1216 19:35:31.149946 14891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt with IP's: []
I1216 19:35:31.259760 14891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt ...
I1216 19:35:31.259789 14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.crt: {Name:mkab02c8a2b648cfe34c559214fe91fe368330f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:31.259942 14891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.key ...
I1216 19:35:31.259953 14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/client.key: {Name:mk64939933222e9d48652e54c5f88a941ed2eb34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:31.260020 14891 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.key.3fc635ee
I1216 19:35:31.260037 14891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.crt.3fc635ee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.82]
I1216 19:35:31.349697 14891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.crt.3fc635ee ...
I1216 19:35:31.349724 14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.crt.3fc635ee: {Name:mkdd15677769fb03ff0f10d64222030963dea71c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:31.349861 14891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.key.3fc635ee ...
I1216 19:35:31.349890 14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.key.3fc635ee: {Name:mke374d02b3d78d575e5dab7ea720b1d4fd93514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:31.349958 14891 certs.go:381] copying /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.crt.3fc635ee -> /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.crt
I1216 19:35:31.350055 14891 certs.go:385] copying /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.key.3fc635ee -> /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.key
I1216 19:35:31.350116 14891 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.key
I1216 19:35:31.350133 14891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.crt with IP's: []
I1216 19:35:31.562828 14891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.crt ...
I1216 19:35:31.562862 14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.crt: {Name:mkde20c8645a9d5d6ee2aaa14492fa5df2fc991a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:31.563049 14891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.key ...
I1216 19:35:31.563065 14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.key: {Name:mkaafb8d808060db36b0da2fab045a5e8b677276 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:31.563289 14891 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca-key.pem (1679 bytes)
I1216 19:35:31.563336 14891 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/ca.pem (1082 bytes)
I1216 19:35:31.563371 14891 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/cert.pem (1123 bytes)
I1216 19:35:31.563408 14891 certs.go:484] found cert: /home/jenkins/minikube-integration/20091-7083/.minikube/certs/key.pem (1679 bytes)
I1216 19:35:31.563963 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1216 19:35:31.600224 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1216 19:35:31.624564 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1216 19:35:31.654903 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1216 19:35:31.685331 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1216 19:35:31.714782 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1216 19:35:31.744799 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1216 19:35:31.774469 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/profiles/addons-618388/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1216 19:35:31.803877 14891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20091-7083/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1216 19:35:31.830743 14891 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1216 19:35:31.848650 14891 ssh_runner.go:195] Run: openssl version
I1216 19:35:31.854999 14891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1216 19:35:31.866738 14891 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1216 19:35:31.871822 14891 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 19:35 /usr/share/ca-certificates/minikubeCA.pem
I1216 19:35:31.871904 14891 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1216 19:35:31.878205 14891 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1216 19:35:31.889756 14891 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1216 19:35:31.894644 14891 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1216 19:35:31.894693 14891 kubeadm.go:392] StartCluster: {Name:addons-618388 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/20090/minikube-v1.34.0-1734029574-20090-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:addons-618388 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1216 19:35:31.894758 14891 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1216 19:35:31.894798 14891 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1216 19:35:31.939435 14891 cri.go:89] found id: ""
I1216 19:35:31.939512 14891 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1216 19:35:31.949992 14891 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1216 19:35:31.960189 14891 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1216 19:35:31.970213 14891 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1216 19:35:31.970234 14891 kubeadm.go:157] found existing configuration files:
I1216 19:35:31.970273 14891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1216 19:35:31.980228 14891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1216 19:35:31.980320 14891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1216 19:35:31.990575 14891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1216 19:35:32.000543 14891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1216 19:35:32.000597 14891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1216 19:35:32.010508 14891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1216 19:35:32.019843 14891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1216 19:35:32.019902 14891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1216 19:35:32.029538 14891 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1216 19:35:32.038950 14891 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1216 19:35:32.039021 14891 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1216 19:35:32.048432 14891 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1216 19:35:32.101787 14891 kubeadm.go:310] [init] Using Kubernetes version: v1.32.0
I1216 19:35:32.101920 14891 kubeadm.go:310] [preflight] Running pre-flight checks
I1216 19:35:32.202928 14891 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I1216 19:35:32.203087 14891 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1216 19:35:32.203266 14891 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1216 19:35:32.211479 14891 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1216 19:35:32.379326 14891 out.go:235] - Generating certificates and keys ...
I1216 19:35:32.379454 14891 kubeadm.go:310] [certs] Using existing ca certificate authority
I1216 19:35:32.379511 14891 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I1216 19:35:32.454399 14891 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I1216 19:35:32.870562 14891 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I1216 19:35:32.990656 14891 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I1216 19:35:33.186140 14891 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I1216 19:35:33.323504 14891 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I1216 19:35:33.323662 14891 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-618388 localhost] and IPs [192.168.39.82 127.0.0.1 ::1]
I1216 19:35:33.491338 14891 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I1216 19:35:33.491521 14891 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-618388 localhost] and IPs [192.168.39.82 127.0.0.1 ::1]
I1216 19:35:33.644477 14891 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I1216 19:35:33.846194 14891 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I1216 19:35:34.027988 14891 kubeadm.go:310] [certs] Generating "sa" key and public key
I1216 19:35:34.028071 14891 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1216 19:35:34.146136 14891 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I1216 19:35:34.260160 14891 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1216 19:35:34.421396 14891 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1216 19:35:34.669541 14891 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1216 19:35:34.863801 14891 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1216 19:35:34.864284 14891 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1216 19:35:34.866624 14891 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1216 19:35:34.868564 14891 out.go:235] - Booting up control plane ...
I1216 19:35:34.868688 14891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1216 19:35:34.868801 14891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1216 19:35:34.868900 14891 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1216 19:35:34.884169 14891 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1216 19:35:34.890798 14891 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1216 19:35:34.890855 14891 kubeadm.go:310] [kubelet-start] Starting the kubelet
I1216 19:35:35.026773 14891 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1216 19:35:35.027989 14891 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1216 19:35:35.528682 14891 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.301534ms
I1216 19:35:35.528811 14891 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I1216 19:35:40.530645 14891 kubeadm.go:310] [api-check] The API server is healthy after 5.002453214s
I1216 19:35:40.542897 14891 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1216 19:35:40.562835 14891 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1216 19:35:40.597774 14891 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I1216 19:35:40.598017 14891 kubeadm.go:310] [mark-control-plane] Marking the node addons-618388 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1216 19:35:40.611055 14891 kubeadm.go:310] [bootstrap-token] Using token: xt4tac.l3e3u4qwnc85x3px
I1216 19:35:40.612916 14891 out.go:235] - Configuring RBAC rules ...
I1216 19:35:40.613085 14891 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1216 19:35:40.618695 14891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1216 19:35:40.625614 14891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1216 19:35:40.629599 14891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1216 19:35:40.636724 14891 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1216 19:35:40.640252 14891 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1216 19:35:40.934793 14891 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1216 19:35:41.368529 14891 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I1216 19:35:41.940442 14891 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I1216 19:35:41.942708 14891 kubeadm.go:310]
I1216 19:35:41.942806 14891 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I1216 19:35:41.942818 14891 kubeadm.go:310]
I1216 19:35:41.942942 14891 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I1216 19:35:41.942954 14891 kubeadm.go:310]
I1216 19:35:41.942985 14891 kubeadm.go:310] mkdir -p $HOME/.kube
I1216 19:35:41.944078 14891 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1216 19:35:41.944163 14891 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1216 19:35:41.944175 14891 kubeadm.go:310]
I1216 19:35:41.944239 14891 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I1216 19:35:41.944249 14891 kubeadm.go:310]
I1216 19:35:41.944322 14891 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I1216 19:35:41.944332 14891 kubeadm.go:310]
I1216 19:35:41.944413 14891 kubeadm.go:310] You should now deploy a pod network to the cluster.
I1216 19:35:41.944541 14891 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1216 19:35:41.944657 14891 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1216 19:35:41.944674 14891 kubeadm.go:310]
I1216 19:35:41.944775 14891 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I1216 19:35:41.944863 14891 kubeadm.go:310] and service account keys on each node and then running the following as root:
I1216 19:35:41.944874 14891 kubeadm.go:310]
I1216 19:35:41.945002 14891 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xt4tac.l3e3u4qwnc85x3px \
I1216 19:35:41.945157 14891 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735 \
I1216 19:35:41.945190 14891 kubeadm.go:310] --control-plane
I1216 19:35:41.945200 14891 kubeadm.go:310]
I1216 19:35:41.945338 14891 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I1216 19:35:41.945351 14891 kubeadm.go:310]
I1216 19:35:41.945460 14891 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xt4tac.l3e3u4qwnc85x3px \
I1216 19:35:41.945610 14891 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:e03b60b144334bf383a3d22daeca854a6b4004373f1847ba3afcb85a998b5735
I1216 19:35:41.946217 14891 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1216 19:35:41.947689 14891 cni.go:84] Creating CNI manager for ""
I1216 19:35:41.947704 14891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1216 19:35:41.949329 14891 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I1216 19:35:41.950698 14891 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1216 19:35:41.962591 14891 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1216 19:35:41.982892 14891 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1216 19:35:41.983006 14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 19:35:41.983029 14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-618388 minikube.k8s.io/updated_at=2024_12_16T19_35_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477 minikube.k8s.io/name=addons-618388 minikube.k8s.io/primary=true
I1216 19:35:42.006830 14891 ops.go:34] apiserver oom_adj: -16
I1216 19:35:42.132903 14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 19:35:42.633212 14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 19:35:43.133717 14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 19:35:43.633740 14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 19:35:44.133125 14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 19:35:44.633784 14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 19:35:45.133784 14891 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1216 19:35:45.217213 14891 kubeadm.go:1113] duration metric: took 3.234269906s to wait for elevateKubeSystemPrivileges
I1216 19:35:45.217247 14891 kubeadm.go:394] duration metric: took 13.322556676s to StartCluster
I1216 19:35:45.217268 14891 settings.go:142] acquiring lock: {Name:mke62e1d1fa6bfae09410847a3fc6f95d0bbbd11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:45.217414 14891 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20091-7083/kubeconfig
I1216 19:35:45.217794 14891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20091-7083/kubeconfig: {Name:mk67073c6dc9abd712825d4490d6430745897f27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1216 19:35:45.217985 14891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1216 19:35:45.218007 14891 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.82 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
I1216 19:35:45.218124 14891 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1216 19:35:45.218249 14891 config.go:182] Loaded profile config "addons-618388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:35:45.218260 14891 addons.go:69] Setting yakd=true in profile "addons-618388"
I1216 19:35:45.218267 14891 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-618388"
I1216 19:35:45.218290 14891 addons.go:234] Setting addon yakd=true in "addons-618388"
I1216 19:35:45.218287 14891 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-618388"
I1216 19:35:45.218310 14891 addons.go:69] Setting metrics-server=true in profile "addons-618388"
I1216 19:35:45.218314 14891 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-618388"
I1216 19:35:45.218321 14891 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-618388"
I1216 19:35:45.218325 14891 addons.go:234] Setting addon metrics-server=true in "addons-618388"
I1216 19:35:45.218330 14891 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-618388"
I1216 19:35:45.218326 14891 addons.go:69] Setting cloud-spanner=true in profile "addons-618388"
I1216 19:35:45.218341 14891 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-618388"
I1216 19:35:45.218347 14891 addons.go:234] Setting addon cloud-spanner=true in "addons-618388"
I1216 19:35:45.218347 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.218358 14891 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-618388"
I1216 19:35:45.218363 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.218347 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.218366 14891 addons.go:69] Setting default-storageclass=true in profile "addons-618388"
I1216 19:35:45.218568 14891 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-618388"
I1216 19:35:45.218322 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.218811 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.218832 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.218850 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.218866 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.218371 14891 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-618388"
I1216 19:35:45.218374 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.218948 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.218986 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.218376 14891 addons.go:69] Setting gcp-auth=true in profile "addons-618388"
I1216 19:35:45.219070 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.218953 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.218384 14891 addons.go:69] Setting ingress=true in profile "addons-618388"
I1216 19:35:45.219097 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.219109 14891 addons.go:234] Setting addon ingress=true in "addons-618388"
I1216 19:35:45.218384 14891 addons.go:69] Setting storage-provisioner=true in profile "addons-618388"
I1216 19:35:45.219124 14891 addons.go:234] Setting addon storage-provisioner=true in "addons-618388"
I1216 19:35:45.218388 14891 addons.go:69] Setting ingress-dns=true in profile "addons-618388"
I1216 19:35:45.219165 14891 addons.go:234] Setting addon ingress-dns=true in "addons-618388"
I1216 19:35:45.219204 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.219455 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.219488 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.219576 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.219070 14891 mustload.go:65] Loading cluster: addons-618388
I1216 19:35:45.219613 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.219583 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.218390 14891 addons.go:69] Setting inspektor-gadget=true in profile "addons-618388"
I1216 19:35:45.219741 14891 addons.go:234] Setting addon inspektor-gadget=true in "addons-618388"
I1216 19:35:45.218347 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.219774 14891 config.go:182] Loaded profile config "addons-618388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
I1216 19:35:45.220016 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.220052 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.218395 14891 addons.go:69] Setting volcano=true in profile "addons-618388"
I1216 19:35:45.220123 14891 addons.go:234] Setting addon volcano=true in "addons-618388"
I1216 19:35:45.220141 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.220200 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.220246 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.220337 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.220576 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.220616 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.220155 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.220716 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.219624 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.220804 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.220862 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.219146 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.221270 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.221314 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.218400 14891 addons.go:69] Setting volumesnapshots=true in profile "addons-618388"
I1216 19:35:45.223835 14891 addons.go:234] Setting addon volumesnapshots=true in "addons-618388"
I1216 19:35:45.223877 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.224274 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.224307 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.227556 14891 out.go:177] * Verifying Kubernetes components...
I1216 19:35:45.218379 14891 addons.go:69] Setting registry=true in profile "addons-618388"
I1216 19:35:45.227892 14891 addons.go:234] Setting addon registry=true in "addons-618388"
I1216 19:35:45.227937 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.228364 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.228410 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.238268 14891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1216 19:35:45.240677 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35731
I1216 19:35:45.240829 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
I1216 19:35:45.240997 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
I1216 19:35:45.241471 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.241600 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.241742 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45505
I1216 19:35:45.241982 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.241996 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.242055 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.242119 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.243018 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.243037 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.243162 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.243177 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.243232 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.243396 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.243427 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.243478 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.243517 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.244071 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.244104 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.259392 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42639
I1216 19:35:45.259393 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.259407 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
I1216 19:35:45.259780 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.259819 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.259903 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.259944 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.272019 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.272089 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.259779 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.272338 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.272431 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37773
I1216 19:35:45.272433 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
I1216 19:35:45.272605 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.272851 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.272984 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.273041 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.273070 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.273606 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.273621 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.273686 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.274485 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.274514 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.274640 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.274650 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.274709 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.274856 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.274916 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.274958 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.275417 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.275452 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.275987 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.276024 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.278411 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.282920 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38029
I1216 19:35:45.283475 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.284033 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.284058 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.284445 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.284619 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.288176 14891 addons.go:234] Setting addon default-storageclass=true in "addons-618388"
I1216 19:35:45.288218 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.288590 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.288626 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.288724 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34905
I1216 19:35:45.292103 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
I1216 19:35:45.292677 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.293222 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.293251 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.293658 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.293909 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.294667 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46397
I1216 19:35:45.295147 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.295739 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.295756 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.296142 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.296208 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.296590 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.298369 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.298427 14891 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I1216 19:35:45.298747 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36303
I1216 19:35:45.300285 14891 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1216 19:35:45.300308 14891 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1216 19:35:45.300335 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.300404 14891 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
I1216 19:35:45.301881 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.302084 14891 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1216 19:35:45.302103 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1216 19:35:45.302123 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.303198 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44077
I1216 19:35:45.303393 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39303
I1216 19:35:45.303649 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.312020 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.312062 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.312092 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.312238 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.312249 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.312839 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.312881 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.315747 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33495
I1216 19:35:45.317481 14891 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-618388"
I1216 19:35:45.317540 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.318020 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.318072 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.318370 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34299
I1216 19:35:45.318689 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.318714 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.318747 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.318786 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.318845 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.318909 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.318938 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.319003 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46107
I1216 19:35:45.319060 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.319086 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.319610 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.319678 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.319787 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.319811 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41847
I1216 19:35:45.319902 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.320230 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.320251 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.320360 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.320524 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.320544 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.320604 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.320738 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.320763 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.320844 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.320886 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.320906 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.320928 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.320988 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.321038 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.321399 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.321475 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.321492 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.321560 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.321604 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.321906 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.322268 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.322350 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.323033 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.322931 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:45.323295 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.323688 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.323728 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.323746 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.323779 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.323788 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.323801 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.323963 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.323988 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.324095 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.324133 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.324335 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.324540 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.324860 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.324999 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.325048 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.325128 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.325927 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.325970 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.326771 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.328162 14891 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
I1216 19:35:45.329392 14891 out.go:177] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1216 19:35:45.331044 14891 out.go:177] - Using image docker.io/registry:2.8.3
I1216 19:35:45.331171 14891 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1216 19:35:45.331190 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1216 19:35:45.331220 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.332051 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43309
I1216 19:35:45.332659 14891 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I1216 19:35:45.332675 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1216 19:35:45.332695 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.336411 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.337791 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.338188 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.338210 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.338285 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39383
I1216 19:35:45.338532 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.338630 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.338742 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.338773 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.338816 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.339029 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.339089 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.339289 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.339456 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.339563 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.339501 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.339768 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.339784 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.339803 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.339916 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.340005 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.340671 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.340783 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.341321 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.341385 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.341738 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.342065 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45021
I1216 19:35:45.342798 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.343506 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.343525 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.344184 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.344446 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.346151 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.348242 14891 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I1216 19:35:45.349631 14891 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1216 19:35:45.349656 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I1216 19:35:45.349678 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.352041 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.353008 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.353390 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.353414 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.353638 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.353846 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.353902 14891 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I1216 19:35:45.354075 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.354239 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.355235 14891 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I1216 19:35:45.355284 14891 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1216 19:35:45.355306 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.358330 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.358692 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.358713 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.358976 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.359146 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.359281 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.359383 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.366276 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46423
I1216 19:35:45.366421 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
I1216 19:35:45.366887 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.367558 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.367578 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.367994 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.368208 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.368887 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44253
I1216 19:35:45.369063 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.369151 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
I1216 19:35:45.369562 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44317
I1216 19:35:45.369651 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.369662 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.370224 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.370245 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.370353 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.370369 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.370566 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.371001 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.371015 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.371039 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.371409 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.371832 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.371881 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.372111 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.372181 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.372849 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.373603 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
I1216 19:35:45.374201 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.374221 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.374296 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.374368 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.374773 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.374919 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.374931 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.374976 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.375971 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.376034 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.376477 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:45.376516 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:45.376734 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.376991 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44167
I1216 19:35:45.377692 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.378257 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.378282 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.378712 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.378742 14891 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
I1216 19:35:45.378879 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.378909 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.379651 14891 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1216 19:35:45.379905 14891 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1216 19:35:45.380018 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
I1216 19:35:45.380661 14891 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I1216 19:35:45.380678 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1216 19:35:45.380697 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.381520 14891 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I1216 19:35:45.381527 14891 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1216 19:35:45.381552 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1216 19:35:45.381556 14891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1216 19:35:45.381568 14891 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1216 19:35:45.381572 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.381587 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.381647 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.382771 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.383385 14891 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1216 19:35:45.383511 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33761
I1216 19:35:45.384241 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.384536 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.384549 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.384809 14891 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I1216 19:35:45.385064 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.385078 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.385933 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.386070 14891 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1216 19:35:45.386226 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.386613 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.386695 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.386705 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.387199 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.387216 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.387222 14891 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
I1216 19:35:45.387294 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.387358 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.387375 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.387402 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.387413 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.387434 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.387479 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.387274 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.387736 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.387784 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.387821 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.387853 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.387884 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.387914 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.388052 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.388103 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.388420 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.388594 14891 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1216 19:35:45.388737 14891 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1216 19:35:45.388751 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1216 19:35:45.388767 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.388818 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39667
I1216 19:35:45.388878 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.389117 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.389867 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.389949 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.390296 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.390424 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.391094 14891 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1216 19:35:45.391354 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.392024 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.392316 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:45.392345 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:45.393288 14891 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1216 19:35:45.394238 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.394244 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:45.394257 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:45.394268 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:45.394267 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.394294 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.394317 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.394274 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:45.394404 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.394552 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.394613 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:45.394624 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:45.394630 14891 main.go:141] libmachine: Making call to close connection to plugin binary
W1216 19:35:45.394691 14891 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1216 19:35:45.394885 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.394978 14891 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
I1216 19:35:45.395838 14891 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1216 19:35:45.396690 14891 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I1216 19:35:45.396711 14891 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
I1216 19:35:45.396728 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.398749 14891 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1216 19:35:45.400002 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.400064 14891 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1216 19:35:45.400452 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.400470 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.400609 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.400771 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.400946 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.401101 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.401251 14891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1216 19:35:45.401260 14891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1216 19:35:45.401273 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.402159 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
I1216 19:35:45.402665 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.403399 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.403417 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.403729 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.404032 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.407360 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.407388 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.407407 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.407421 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.407439 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.407582 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.407697 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.407831 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.409274 14891 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1216 19:35:45.409607 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
I1216 19:35:45.410041 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:45.410529 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:45.410542 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:45.410887 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:45.411054 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:45.412018 14891 out.go:177] - Using image docker.io/busybox:stable
I1216 19:35:45.412705 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:45.412897 14891 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1216 19:35:45.412938 14891 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1216 19:35:45.412969 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.413991 14891 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1216 19:35:45.414011 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1216 19:35:45.414030 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:45.416891 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.417201 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.417237 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.417281 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.417351 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.417560 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.417739 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:45.417764 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:45.417745 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.417832 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:45.417931 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:45.417963 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:45.418074 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:45.418174 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
W1216 19:35:45.421578 14891 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:43616->192.168.39.82:22: read: connection reset by peer
I1216 19:35:45.421609 14891 retry.go:31] will retry after 278.37327ms: ssh: handshake failed: read tcp 192.168.39.1:43616->192.168.39.82:22: read: connection reset by peer
I1216 19:35:45.741436 14891 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1216 19:35:45.741613 14891 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1216 19:35:45.754977 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1216 19:35:45.796049 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1216 19:35:45.796351 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1216 19:35:45.802207 14891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1216 19:35:45.802232 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1216 19:35:45.881326 14891 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I1216 19:35:45.881347 14891 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1216 19:35:45.918485 14891 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I1216 19:35:45.918517 14891 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1216 19:35:45.927120 14891 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1216 19:35:45.927150 14891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1216 19:35:45.951922 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1216 19:35:45.954107 14891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1216 19:35:45.954132 14891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1216 19:35:45.964446 14891 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
I1216 19:35:45.964469 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
I1216 19:35:45.991496 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1216 19:35:46.010176 14891 node_ready.go:35] waiting up to 6m0s for node "addons-618388" to be "Ready" ...
I1216 19:35:46.026897 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1216 19:35:46.034081 14891 node_ready.go:49] node "addons-618388" has status "Ready":"True"
I1216 19:35:46.034112 14891 node_ready.go:38] duration metric: took 23.902784ms for node "addons-618388" to be "Ready" ...
I1216 19:35:46.034127 14891 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1216 19:35:46.062622 14891 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-jqhz4" in "kube-system" namespace to be "Ready" ...
I1216 19:35:46.149755 14891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1216 19:35:46.149794 14891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1216 19:35:46.153498 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1216 19:35:46.165027 14891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1216 19:35:46.165050 14891 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1216 19:35:46.233019 14891 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I1216 19:35:46.233044 14891 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1216 19:35:46.244692 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1216 19:35:46.265574 14891 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I1216 19:35:46.265605 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1216 19:35:46.294298 14891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1216 19:35:46.294324 14891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1216 19:35:46.302149 14891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1216 19:35:46.302174 14891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1216 19:35:46.369522 14891 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1216 19:35:46.369549 14891 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1216 19:35:46.400466 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1216 19:35:46.418920 14891 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I1216 19:35:46.418954 14891 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1216 19:35:46.516855 14891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1216 19:35:46.516883 14891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1216 19:35:46.586798 14891 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1216 19:35:46.586826 14891 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1216 19:35:46.592487 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1216 19:35:46.654821 14891 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I1216 19:35:46.654844 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1216 19:35:46.672171 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1216 19:35:46.782627 14891 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1216 19:35:46.782659 14891 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1216 19:35:46.792217 14891 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1216 19:35:46.792261 14891 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1216 19:35:46.925990 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1216 19:35:46.963050 14891 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1216 19:35:46.963081 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1216 19:35:46.975101 14891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1216 19:35:46.975123 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1216 19:35:47.076600 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1216 19:35:47.285758 14891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1216 19:35:47.285793 14891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1216 19:35:47.616242 14891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1216 19:35:47.616273 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1216 19:35:47.966049 14891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1216 19:35:47.966079 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1216 19:35:48.071985 14891 pod_ready.go:103] pod "coredns-668d6bf9bc-jqhz4" in "kube-system" namespace has status "Ready":"False"
I1216 19:35:48.350403 14891 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1216 19:35:48.350428 14891 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1216 19:35:48.574529 14891 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.32.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.832886184s)
I1216 19:35:48.574563 14891 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1216 19:35:48.574579 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.819563385s)
I1216 19:35:48.574630 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:48.574644 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:48.574944 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:48.575029 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:48.575042 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:48.575050 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:48.575058 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:48.575290 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:48.575309 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:48.751882 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1216 19:35:49.089711 14891 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-618388" context rescaled to 1 replicas
I1216 19:35:49.496650 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.700560289s)
I1216 19:35:49.496706 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:49.496670 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.700294926s)
I1216 19:35:49.496762 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:49.496717 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:49.496837 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:49.497148 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:49.497187 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:49.497195 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:49.497193 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:49.497203 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:49.497206 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:49.497214 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:49.497220 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:49.497233 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:49.497222 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:49.497564 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:49.497622 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:49.497629 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:49.497645 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:49.497664 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:49.497675 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:50.072928 14891 pod_ready.go:103] pod "coredns-668d6bf9bc-jqhz4" in "kube-system" namespace has status "Ready":"False"
I1216 19:35:50.597435 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.645473122s)
I1216 19:35:50.597485 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:50.597497 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:50.597751 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:50.597772 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:50.597781 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:50.597790 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:50.598056 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:50.598076 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:52.135522 14891 pod_ready.go:103] pod "coredns-668d6bf9bc-jqhz4" in "kube-system" namespace has status "Ready":"False"
I1216 19:35:52.244753 14891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1216 19:35:52.244798 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:52.247895 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:52.248316 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:52.248347 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:52.248510 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:52.248725 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:52.248900 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:52.249046 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:52.858684 14891 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1216 19:35:52.986747 14891 addons.go:234] Setting addon gcp-auth=true in "addons-618388"
I1216 19:35:52.986811 14891 host.go:66] Checking if "addons-618388" exists ...
I1216 19:35:52.987279 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:52.987323 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:53.003709 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33267
I1216 19:35:53.004251 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:53.004816 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:53.004843 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:53.005171 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:53.005629 14891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1216 19:35:53.005655 14891 main.go:141] libmachine: Launching plugin server for driver kvm2
I1216 19:35:53.021457 14891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40217
I1216 19:35:53.021913 14891 main.go:141] libmachine: () Calling .GetVersion
I1216 19:35:53.022423 14891 main.go:141] libmachine: Using API Version 1
I1216 19:35:53.022446 14891 main.go:141] libmachine: () Calling .SetConfigRaw
I1216 19:35:53.022746 14891 main.go:141] libmachine: () Calling .GetMachineName
I1216 19:35:53.022946 14891 main.go:141] libmachine: (addons-618388) Calling .GetState
I1216 19:35:53.024670 14891 main.go:141] libmachine: (addons-618388) Calling .DriverName
I1216 19:35:53.024914 14891 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1216 19:35:53.024941 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHHostname
I1216 19:35:53.028126 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:53.028571 14891 main.go:141] libmachine: (addons-618388) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:31:2c", ip: ""} in network mk-addons-618388: {Iface:virbr1 ExpiryTime:2024-12-16 20:35:13 +0000 UTC Type:0 Mac:52:54:00:3b:31:2c Iaid: IPaddr:192.168.39.82 Prefix:24 Hostname:addons-618388 Clientid:01:52:54:00:3b:31:2c}
I1216 19:35:53.028599 14891 main.go:141] libmachine: (addons-618388) DBG | domain addons-618388 has defined IP address 192.168.39.82 and MAC address 52:54:00:3b:31:2c in network mk-addons-618388
I1216 19:35:53.028712 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHPort
I1216 19:35:53.028899 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHKeyPath
I1216 19:35:53.029080 14891 main.go:141] libmachine: (addons-618388) Calling .GetSSHUsername
I1216 19:35:53.029270 14891 sshutil.go:53] new ssh client: &{IP:192.168.39.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20091-7083/.minikube/machines/addons-618388/id_rsa Username:docker}
I1216 19:35:53.633152 14891 pod_ready.go:93] pod "coredns-668d6bf9bc-jqhz4" in "kube-system" namespace has status "Ready":"True"
I1216 19:35:53.633185 14891 pod_ready.go:82] duration metric: took 7.570536299s for pod "coredns-668d6bf9bc-jqhz4" in "kube-system" namespace to be "Ready" ...
I1216 19:35:53.633200 14891 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-tf9ml" in "kube-system" namespace to be "Ready" ...
I1216 19:35:54.876398 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.884865187s)
I1216 19:35:54.876456 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.849525386s)
I1216 19:35:54.876495 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.876507 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.876529 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.723003936s)
I1216 19:35:54.876463 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.876560 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.876566 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.876569 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.876632 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (8.631910629s)
I1216 19:35:54.876652 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.876660 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.876682 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.476186454s)
I1216 19:35:54.876706 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.876721 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.876756 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.876765 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.876772 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.876778 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.876778 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.876840 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.876857 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.876862 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.876870 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.876875 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.876917 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.876923 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.876930 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.876948 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.876986 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.876997 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.877006 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.877014 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.877012 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.877079 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.877087 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.877108 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.877117 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.877194 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.877218 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.877225 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.877235 14891 addons.go:475] Verifying addon ingress=true in "addons-618388"
I1216 19:35:54.878916 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.878942 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.878962 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.879030 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.28650748s)
I1216 19:35:54.879068 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.879080 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.879126 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.206927167s)
I1216 19:35:54.879144 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.879154 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.879192 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.879208 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.879207 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.953183187s)
I1216 19:35:54.879217 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.879227 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.879230 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.879260 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.879293 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.879302 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.879309 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.879316 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.879355 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.802721502s)
W1216 19:35:54.879383 14891 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1216 19:35:54.879406 14891 retry.go:31] will retry after 198.781214ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1216 19:35:54.879438 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.879459 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.879466 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.879482 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.879489 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.879493 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.879495 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.879501 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.879503 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.879512 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.879546 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.880922 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.880937 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.880962 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.880966 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.880970 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.880973 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.881146 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:54.881177 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.881184 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.881192 14891 addons.go:475] Verifying addon metrics-server=true in "addons-618388"
I1216 19:35:54.882338 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.882351 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.882530 14891 out.go:177] * Verifying ingress addon...
I1216 19:35:54.882539 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.882550 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.882559 14891 addons.go:475] Verifying addon registry=true in "addons-618388"
I1216 19:35:54.883566 14891 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-618388 service yakd-dashboard -n yakd-dashboard
I1216 19:35:54.884456 14891 out.go:177] * Verifying registry addon...
I1216 19:35:54.885480 14891 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1216 19:35:54.887072 14891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1216 19:35:54.913771 14891 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1216 19:35:54.913795 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:35:54.919312 14891 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1216 19:35:54.919342 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:35:54.931699 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.931725 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.931832 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:54.931852 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:54.932070 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.932128 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.932157 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:54.932173 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:54.932190 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
W1216 19:35:54.932211 14891 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I1216 19:35:55.079067 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1216 19:35:55.394453 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:35:55.394471 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:35:55.653170 14891 pod_ready.go:103] pod "coredns-668d6bf9bc-tf9ml" in "kube-system" namespace has status "Ready":"False"
I1216 19:35:55.920441 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:35:55.946986 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:35:56.151678 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.399740809s)
I1216 19:35:56.151744 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:56.151762 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:56.151760 14891 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.126802385s)
I1216 19:35:56.152023 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:56.152070 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:56.152078 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:56.152094 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:56.152101 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:56.152358 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:56.152376 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:56.152387 14891 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-618388"
I1216 19:35:56.154109 14891 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I1216 19:35:56.155166 14891 out.go:177] * Verifying csi-hostpath-driver addon...
I1216 19:35:56.156897 14891 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1216 19:35:56.157613 14891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 19:35:56.158095 14891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1216 19:35:56.158114 14891 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1216 19:35:56.221207 14891 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 19:35:56.221235 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:35:56.330703 14891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1216 19:35:56.330726 14891 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1216 19:35:56.416259 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:35:56.416985 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:35:56.446561 14891 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1216 19:35:56.446591 14891 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1216 19:35:56.628911 14891 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1216 19:35:56.666666 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:35:56.890676 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:35:56.892748 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:35:57.161936 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:35:57.390384 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:35:57.390520 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:35:57.603624 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.524506001s)
I1216 19:35:57.603689 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:57.603700 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:57.603942 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:57.603965 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:57.603977 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:57.603986 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:57.603990 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:57.604218 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:57.604234 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:57.662900 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:35:57.889993 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:35:57.890314 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:35:58.170660 14891 pod_ready.go:103] pod "coredns-668d6bf9bc-tf9ml" in "kube-system" namespace has status "Ready":"False"
I1216 19:35:58.176826 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:35:58.397562 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:35:58.418425 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:35:58.612884 14891 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.983928357s)
I1216 19:35:58.612949 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:58.612968 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:58.613359 14891 main.go:141] libmachine: (addons-618388) DBG | Closing plugin on server side
I1216 19:35:58.613362 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:58.613394 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:58.613411 14891 main.go:141] libmachine: Making call to close driver server
I1216 19:35:58.613424 14891 main.go:141] libmachine: (addons-618388) Calling .Close
I1216 19:35:58.613632 14891 main.go:141] libmachine: Successfully made call to close driver server
I1216 19:35:58.613646 14891 main.go:141] libmachine: Making call to close connection to plugin binary
I1216 19:35:58.614773 14891 addons.go:475] Verifying addon gcp-auth=true in "addons-618388"
I1216 19:35:58.616512 14891 out.go:177] * Verifying gcp-auth addon...
I1216 19:35:58.618420 14891 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1216 19:35:58.631884 14891 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1216 19:35:58.631903 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:35:58.737791 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:35:58.893170 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:35:58.894761 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:35:59.122563 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:35:59.139221 14891 pod_ready.go:98] pod "coredns-668d6bf9bc-tf9ml" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:58 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.82 HostIPs:[{IP:192.168.39.82}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-12-16 19:35:45 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-16 19:35:51 +0000 UTC,FinishedAt:2024-12-16 19:35:57 +0000 UTC,ContainerID:cri-o://bf92518ad21c8f1d35a45b7087078c9626af6ad30a3caacf3e0448ed04bf3ef6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://bf92518ad21c8f1d35a45b7087078c9626af6ad30a3caacf3e0448ed04bf3ef6 Started:0xc0007a1920 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00069e220} {Name:kube-api-access-84tkx MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00069e250}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I1216 19:35:59.139263 14891 pod_ready.go:82] duration metric: took 5.506055119s for pod "coredns-668d6bf9bc-tf9ml" in "kube-system" namespace to be "Ready" ...
E1216 19:35:59.139274 14891 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-tf9ml" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:58 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-16 19:35:45 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.82 HostIPs:[{IP:192.168.39.82}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-12-16 19:35:45 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-16 19:35:51 +0000 UTC,FinishedAt:2024-12-16 19:35:57 +0000 UTC,ContainerID:cri-o://bf92518ad21c8f1d35a45b7087078c9626af6ad30a3caacf3e0448ed04bf3ef6,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://bf92518ad21c8f1d35a45b7087078c9626af6ad30a3caacf3e0448ed04bf3ef6 Started:0xc0007a1920 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00069e220} {Name:kube-api-access-84tkx MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00069e250}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
I1216 19:35:59.139284 14891 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-618388" in "kube-system" namespace to be "Ready" ...
I1216 19:35:59.145293 14891 pod_ready.go:93] pod "etcd-addons-618388" in "kube-system" namespace has status "Ready":"True"
I1216 19:35:59.145325 14891 pod_ready.go:82] duration metric: took 6.032862ms for pod "etcd-addons-618388" in "kube-system" namespace to be "Ready" ...
I1216 19:35:59.145339 14891 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-618388" in "kube-system" namespace to be "Ready" ...
I1216 19:35:59.150370 14891 pod_ready.go:93] pod "kube-apiserver-addons-618388" in "kube-system" namespace has status "Ready":"True"
I1216 19:35:59.150393 14891 pod_ready.go:82] duration metric: took 5.045573ms for pod "kube-apiserver-addons-618388" in "kube-system" namespace to be "Ready" ...
I1216 19:35:59.150405 14891 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-618388" in "kube-system" namespace to be "Ready" ...
I1216 19:35:59.155518 14891 pod_ready.go:93] pod "kube-controller-manager-addons-618388" in "kube-system" namespace has status "Ready":"True"
I1216 19:35:59.155542 14891 pod_ready.go:82] duration metric: took 5.129856ms for pod "kube-controller-manager-addons-618388" in "kube-system" namespace to be "Ready" ...
I1216 19:35:59.155554 14891 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8t666" in "kube-system" namespace to be "Ready" ...
I1216 19:35:59.160983 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:35:59.160995 14891 pod_ready.go:93] pod "kube-proxy-8t666" in "kube-system" namespace has status "Ready":"True"
I1216 19:35:59.161025 14891 pod_ready.go:82] duration metric: took 5.463312ms for pod "kube-proxy-8t666" in "kube-system" namespace to be "Ready" ...
I1216 19:35:59.161037 14891 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-618388" in "kube-system" namespace to be "Ready" ...
I1216 19:35:59.394369 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:35:59.394719 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:35:59.537077 14891 pod_ready.go:93] pod "kube-scheduler-addons-618388" in "kube-system" namespace has status "Ready":"True"
I1216 19:35:59.537103 14891 pod_ready.go:82] duration metric: took 376.029382ms for pod "kube-scheduler-addons-618388" in "kube-system" namespace to be "Ready" ...
I1216 19:35:59.537110 14891 pod_ready.go:39] duration metric: took 13.502971624s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1216 19:35:59.537126 14891 api_server.go:52] waiting for apiserver process to appear ...
I1216 19:35:59.537181 14891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1216 19:35:59.560419 14891 api_server.go:72] duration metric: took 14.342379461s to wait for apiserver process to appear ...
I1216 19:35:59.560443 14891 api_server.go:88] waiting for apiserver healthz status ...
I1216 19:35:59.560462 14891 api_server.go:253] Checking apiserver healthz at https://192.168.39.82:8443/healthz ...
I1216 19:35:59.565434 14891 api_server.go:279] https://192.168.39.82:8443/healthz returned 200:
ok
I1216 19:35:59.567343 14891 api_server.go:141] control plane version: v1.32.0
I1216 19:35:59.567377 14891 api_server.go:131] duration metric: took 6.927743ms to wait for apiserver health ...
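Note: the healthz probe above is an authenticated GET against the apiserver's /healthz path, which returns the body "ok" when healthy. A minimal sketch of the same style of check using client-go's REST client, assuming cs is a *kubernetes.Clientset built from kubeconfig as in the previous sketch (illustrative, not minikube's api_server.go):

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkHealthz performs an authenticated GET against /healthz and returns
// the raw response body ("ok" on a healthy apiserver).
func checkHealthz(ctx context.Context, cs *kubernetes.Clientset) (string, error) {
	body, err := cs.Discovery().RESTClient().
		Get().
		AbsPath("/healthz").
		Do(ctx).
		Raw()
	if err != nil {
		return "", err
	}
	fmt.Printf("/healthz returned: %s\n", body)
	return string(body), nil
}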
I1216 19:35:59.567384 14891 system_pods.go:43] waiting for kube-system pods to appear ...
I1216 19:35:59.622774 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:35:59.662392 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:35:59.741680 14891 system_pods.go:59] 18 kube-system pods found
I1216 19:35:59.741714 14891 system_pods.go:61] "amd-gpu-device-plugin-t9xls" [998af96b-a6d5-438c-8ffb-97b11028796f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1216 19:35:59.741721 14891 system_pods.go:61] "coredns-668d6bf9bc-jqhz4" [1d168f2c-2593-4ee9-a909-ced7e32adca5] Running
I1216 19:35:59.741728 14891 system_pods.go:61] "csi-hostpath-attacher-0" [a6ff89b4-0d31-4e72-826a-12cf756c7e4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1216 19:35:59.741734 14891 system_pods.go:61] "csi-hostpath-resizer-0" [7c08e8c6-a4d2-48d1-8641-fce068dbafa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1216 19:35:59.741742 14891 system_pods.go:61] "csi-hostpathplugin-fmz2d" [c682dd96-c52d-4c59-8b61-6fb5e8f9027a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1216 19:35:59.741746 14891 system_pods.go:61] "etcd-addons-618388" [5e5b4607-bc43-46f6-b1e1-2c096e3f4431] Running
I1216 19:35:59.741751 14891 system_pods.go:61] "kube-apiserver-addons-618388" [6f76d8bc-1a39-45dc-b974-21776046dccf] Running
I1216 19:35:59.741754 14891 system_pods.go:61] "kube-controller-manager-addons-618388" [89b9f73d-e0dd-4958-b78d-eec172386bc6] Running
I1216 19:35:59.741759 14891 system_pods.go:61] "kube-ingress-dns-minikube" [913a8e1d-d56f-4b34-89b0-afa60ef45d1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1216 19:35:59.741764 14891 system_pods.go:61] "kube-proxy-8t666" [397ca8ee-6184-4c67-9cc2-df6a118f9ec7] Running
I1216 19:35:59.741768 14891 system_pods.go:61] "kube-scheduler-addons-618388" [26b2db05-10ed-42f8-96f7-3345931f70a9] Running
I1216 19:35:59.741774 14891 system_pods.go:61] "metrics-server-7fbb699795-c995d" [4213f921-b992-420b-bd80-e0ad67a43567] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1216 19:35:59.741780 14891 system_pods.go:61] "nvidia-device-plugin-daemonset-fmpb4" [e8d4bb90-d999-45bf-96e0-304cf36a3790] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1216 19:35:59.741786 14891 system_pods.go:61] "registry-6c86875c6f-lxvbn" [ec5514ad-5010-4fd5-bae5-fa96610b47b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1216 19:35:59.741793 14891 system_pods.go:61] "registry-proxy-49ln5" [29c16cb5-dd77-4e42-a748-3d4a7a80fb9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1216 19:35:59.741803 14891 system_pods.go:61] "snapshot-controller-68b874b76f-dzm7s" [4a9bc6bd-7ed3-4b60-9f26-33fb55f94e9e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1216 19:35:59.741809 14891 system_pods.go:61] "snapshot-controller-68b874b76f-qp7nw" [c8817fea-96d6-4405-8c50-674c5e47b8c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1216 19:35:59.741813 14891 system_pods.go:61] "storage-provisioner" [8df30b29-628b-40a9-85a1-0a2edb5357ab] Running
I1216 19:35:59.741820 14891 system_pods.go:74] duration metric: took 174.430048ms to wait for pod list to return data ...
I1216 19:35:59.741830 14891 default_sa.go:34] waiting for default service account to be created ...
I1216 19:35:59.889133 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:35:59.890625 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:35:59.937013 14891 default_sa.go:45] found service account: "default"
I1216 19:35:59.937039 14891 default_sa.go:55] duration metric: took 195.20084ms for default service account to be created ...
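Note: the default_sa wait above only needs the "default" ServiceAccount to exist in the "default" namespace. A rough sketch of that lookup, again assuming a clientset cs (the function name is hypothetical):

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// defaultServiceAccountExists reports whether the "default" ServiceAccount
// has been created yet; callers poll and retry until it returns true.
func defaultServiceAccountExists(ctx context.Context, cs *kubernetes.Clientset) bool {
	sa, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	if err != nil {
		return false // likely not created yet, or a transient API error
	}
	fmt.Printf("found service account: %q\n", sa.Name)
	return true
}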
I1216 19:35:59.937047 14891 system_pods.go:116] waiting for k8s-apps to be running ...
I1216 19:36:00.122252 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:00.141373 14891 system_pods.go:86] 18 kube-system pods found
I1216 19:36:00.141413 14891 system_pods.go:89] "amd-gpu-device-plugin-t9xls" [998af96b-a6d5-438c-8ffb-97b11028796f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1216 19:36:00.141422 14891 system_pods.go:89] "coredns-668d6bf9bc-jqhz4" [1d168f2c-2593-4ee9-a909-ced7e32adca5] Running
I1216 19:36:00.141433 14891 system_pods.go:89] "csi-hostpath-attacher-0" [a6ff89b4-0d31-4e72-826a-12cf756c7e4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1216 19:36:00.141442 14891 system_pods.go:89] "csi-hostpath-resizer-0" [7c08e8c6-a4d2-48d1-8641-fce068dbafa2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1216 19:36:00.141474 14891 system_pods.go:89] "csi-hostpathplugin-fmz2d" [c682dd96-c52d-4c59-8b61-6fb5e8f9027a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1216 19:36:00.141484 14891 system_pods.go:89] "etcd-addons-618388" [5e5b4607-bc43-46f6-b1e1-2c096e3f4431] Running
I1216 19:36:00.141493 14891 system_pods.go:89] "kube-apiserver-addons-618388" [6f76d8bc-1a39-45dc-b974-21776046dccf] Running
I1216 19:36:00.141508 14891 system_pods.go:89] "kube-controller-manager-addons-618388" [89b9f73d-e0dd-4958-b78d-eec172386bc6] Running
I1216 19:36:00.141516 14891 system_pods.go:89] "kube-ingress-dns-minikube" [913a8e1d-d56f-4b34-89b0-afa60ef45d1a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1216 19:36:00.141522 14891 system_pods.go:89] "kube-proxy-8t666" [397ca8ee-6184-4c67-9cc2-df6a118f9ec7] Running
I1216 19:36:00.141529 14891 system_pods.go:89] "kube-scheduler-addons-618388" [26b2db05-10ed-42f8-96f7-3345931f70a9] Running
I1216 19:36:00.141539 14891 system_pods.go:89] "metrics-server-7fbb699795-c995d" [4213f921-b992-420b-bd80-e0ad67a43567] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1216 19:36:00.141554 14891 system_pods.go:89] "nvidia-device-plugin-daemonset-fmpb4" [e8d4bb90-d999-45bf-96e0-304cf36a3790] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1216 19:36:00.141567 14891 system_pods.go:89] "registry-6c86875c6f-lxvbn" [ec5514ad-5010-4fd5-bae5-fa96610b47b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1216 19:36:00.141576 14891 system_pods.go:89] "registry-proxy-49ln5" [29c16cb5-dd77-4e42-a748-3d4a7a80fb9c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1216 19:36:00.141588 14891 system_pods.go:89] "snapshot-controller-68b874b76f-dzm7s" [4a9bc6bd-7ed3-4b60-9f26-33fb55f94e9e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1216 19:36:00.141601 14891 system_pods.go:89] "snapshot-controller-68b874b76f-qp7nw" [c8817fea-96d6-4405-8c50-674c5e47b8c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1216 19:36:00.141608 14891 system_pods.go:89] "storage-provisioner" [8df30b29-628b-40a9-85a1-0a2edb5357ab] Running
I1216 19:36:00.141623 14891 system_pods.go:126] duration metric: took 204.568553ms to wait for k8s-apps to be running ...
I1216 19:36:00.141636 14891 system_svc.go:44] waiting for kubelet service to be running ....
I1216 19:36:00.141689 14891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1216 19:36:00.162881 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:00.192172 14891 system_svc.go:56] duration metric: took 50.528142ms WaitForService to wait for kubelet
I1216 19:36:00.192198 14891 kubeadm.go:582] duration metric: took 14.974162621s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1216 19:36:00.192217 14891 node_conditions.go:102] verifying NodePressure condition ...
I1216 19:36:00.348810 14891 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1216 19:36:00.348842 14891 node_conditions.go:123] node cpu capacity is 2
I1216 19:36:00.348853 14891 node_conditions.go:105] duration metric: took 156.630414ms to run NodePressure ...
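Note: the node_conditions check above reads node capacity (ephemeral storage, CPU) and verifies that no pressure conditions are set. A sketch of that kind of inspection with client-go, assuming a clientset cs (illustrative only, not minikube's node_conditions.go):

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacityAndPressure lists each node's ephemeral-storage and CPU
// capacity and flags any pressure conditions that are currently True.
func printNodeCapacityAndPressure(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
				}
			}
		}
	}
	return nil
}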
I1216 19:36:00.348865 14891 start.go:241] waiting for startup goroutines ...
I1216 19:36:00.390768 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:00.391428 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:00.622424 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:00.662096 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:00.891761 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:00.891910 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:01.122067 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:01.161609 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:01.389753 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:01.391529 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:01.622878 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:01.661776 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:01.894410 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:01.895235 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:02.122686 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:02.162346 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:02.390099 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:02.391269 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:02.623068 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:02.661905 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:02.890109 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:02.892266 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:03.122338 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:03.162243 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:03.389955 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:03.391023 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:03.622265 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:03.724776 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:03.891464 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:03.891750 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:04.122664 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:04.164370 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:04.389479 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:04.391363 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:04.623153 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:04.662098 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:04.891520 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:04.891761 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:05.122682 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:05.162325 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:05.393158 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:05.393385 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:05.624924 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:05.661744 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:05.890412 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:05.891530 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:06.122800 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:06.163195 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:06.391796 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:06.392361 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:06.623200 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:06.663092 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:06.889696 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:06.892054 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:07.122888 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:07.163116 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:07.391691 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:07.392118 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:07.621614 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:07.662996 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:07.890959 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:07.891838 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:08.121527 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:08.162600 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:08.390260 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:08.390747 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:08.623050 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:08.661867 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:08.889805 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:08.892438 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:09.122622 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:09.163733 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:09.393027 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:09.393817 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:09.926093 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:09.926212 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:09.926481 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:09.927041 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:10.122661 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:10.162944 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:10.391125 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:10.392602 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:10.622453 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:10.671480 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:10.889497 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:10.890978 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:11.121660 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:11.162774 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:11.390506 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:11.391730 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:11.621586 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:11.662409 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:11.890761 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:11.890981 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:12.284262 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:12.285568 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:12.393567 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:12.395963 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:12.621851 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:12.662953 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:12.890845 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:12.891667 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:13.123057 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:13.162330 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:13.390772 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:13.392100 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:13.622496 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:13.662961 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:13.890239 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:13.894222 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:14.122541 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:14.163432 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:14.529204 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:14.529760 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:14.627982 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:14.662320 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:14.889278 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:14.890592 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:15.123785 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:15.164145 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:15.390904 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:15.391099 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:15.621966 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:15.661981 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:15.890605 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:15.891141 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:16.122728 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:16.163770 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:16.390780 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:16.391217 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:16.720449 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:16.720925 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:16.890888 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:16.891478 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:17.122467 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:17.162674 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:17.392130 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:17.392709 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:17.622200 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:17.662531 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:17.892075 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:17.892534 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:18.122653 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:18.162984 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:18.391307 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:18.392335 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:18.621981 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:18.662094 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:18.891374 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:18.891973 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:19.126668 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:19.164969 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:19.391719 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:19.392178 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:19.621878 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:19.664281 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:19.889761 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:19.892171 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:20.122123 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:20.223709 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:20.390868 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:20.391312 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:20.622601 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:20.663621 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:20.890977 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:20.891712 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:21.122716 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:21.162801 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:21.389760 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:21.391447 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:21.624305 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:21.662128 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:21.890354 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:21.891357 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:22.122063 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:22.162765 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:22.434216 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:22.434427 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:22.621985 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:22.662084 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:22.893960 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:22.991368 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:23.122598 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:23.162627 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:23.390580 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:23.392806 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:23.621592 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:23.662530 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:23.890194 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:23.892116 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:24.122979 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:24.163189 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:24.391328 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:24.392874 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:24.621765 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:24.663393 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:24.892827 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:24.893475 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:25.123555 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:25.167902 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:25.390847 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:25.391007 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:25.622210 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:25.661897 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:25.892554 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:25.892768 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:26.123346 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:26.161959 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:26.390428 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:26.392200 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:26.621882 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:26.662059 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:26.890443 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:26.892305 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:27.133091 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:27.164511 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:27.390661 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:27.393336 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:27.622494 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:27.662384 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:27.907581 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:27.908097 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:28.123139 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:28.163482 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:28.390241 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:28.393531 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:28.623784 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:28.662657 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:28.891476 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:28.891857 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:29.122858 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:29.163695 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:29.392193 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:29.392777 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:29.622418 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:29.664379 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:29.892472 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:29.892762 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:30.122198 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:30.162576 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:30.390267 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:30.391353 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:30.623079 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:30.663391 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:30.890199 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:30.891347 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:31.121724 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:31.161785 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:31.391698 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:31.392422 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:31.622270 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:31.662075 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:31.889698 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:31.890978 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:32.122724 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:32.162858 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:32.390276 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:32.392883 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:32.622815 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:32.664851 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:33.030774 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:33.032086 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:33.152388 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:33.167449 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:33.392006 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:33.392621 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:33.622487 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:33.662806 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:33.890173 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:33.892200 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:34.122091 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:34.162631 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:34.391267 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:34.392024 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:34.622356 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:34.663443 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:34.891897 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:34.893835 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:35.122332 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:35.166239 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:35.390869 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:35.392074 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:35.621828 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:35.664597 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:35.892337 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:35.897988 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:36.122592 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:36.164635 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:36.389381 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:36.390997 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:36.621947 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:36.663598 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:36.891558 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:36.895307 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:37.122981 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:37.162049 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:37.390566 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:37.391933 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:37.621701 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:37.662812 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:37.890271 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:37.890279 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:38.122818 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:38.162767 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:38.391174 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:38.391174 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:38.622733 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:38.663633 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:38.891438 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:38.891566 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:39.122640 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:39.162784 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:39.478504 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:39.478622 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:39.624178 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:39.724855 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:39.890392 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:39.891524 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:40.121732 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:40.166528 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:40.401465 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:40.401469 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:40.623088 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:40.662363 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:40.891217 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:40.892110 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:41.121697 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:41.162860 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:41.390546 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:41.393038 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:41.622012 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:41.662205 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:41.892282 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:41.894263 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:42.123360 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:42.162295 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:42.391233 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:42.391993 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:42.623542 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:42.724906 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:42.891286 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:42.891618 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1216 19:36:43.122964 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:43.162484 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:43.394467 14891 kapi.go:107] duration metric: took 48.507351034s to wait for kubernetes.io/minikube-addons=registry ...
I1216 19:36:43.394578 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:43.623016 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:43.662226 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:43.889564 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:44.122471 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:44.162662 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:44.390929 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:44.622649 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:44.662440 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:44.889332 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:45.121963 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:45.161893 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:45.389632 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:45.622790 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:45.662854 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:45.890229 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:46.122528 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:46.162752 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:46.390422 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:46.622557 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:46.663097 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:46.889957 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:47.122083 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:47.162387 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:47.390396 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:47.624476 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:47.662328 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:47.891612 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:48.122610 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:48.162972 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:48.391286 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:48.623090 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:48.662175 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:48.889361 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:49.122787 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:49.162754 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:49.390059 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:49.622530 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:49.794685 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:49.894843 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:50.123037 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:50.224978 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:50.391336 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:50.623309 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:50.662379 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:50.890426 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:51.121987 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:51.162146 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:51.402889 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:51.622818 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:51.665739 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:51.890496 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:52.123294 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:52.225394 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:52.389577 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:52.622070 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:52.662794 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:52.890189 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:53.121703 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:53.163482 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:53.390306 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:53.623958 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:53.662395 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:53.890450 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:54.129972 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:54.237375 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:54.389828 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:54.622885 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:54.662666 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:54.890276 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:55.124818 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:55.163228 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:55.390841 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:55.622328 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:55.665311 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:55.890586 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:56.123512 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:56.162132 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:56.390183 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:56.622617 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:56.663979 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:56.890154 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:57.128701 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:57.163574 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:57.390136 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:57.623092 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:57.662086 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:57.891132 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:58.122077 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:58.162083 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:58.390403 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:58.622513 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:58.662879 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:58.890759 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:59.122178 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:59.164506 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:59.398537 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:36:59.626687 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:36:59.740877 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:36:59.894055 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:00.124268 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:00.226057 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:00.392111 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:00.621793 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:00.662869 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:00.890644 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:01.123542 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:01.163151 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:01.391049 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:01.621631 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:01.663074 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:01.891007 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:02.122547 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:02.163307 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:02.389687 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:02.623756 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:02.663319 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:02.891647 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:03.122443 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:03.162700 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:03.390031 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:03.874029 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:03.875337 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:03.890107 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:04.122016 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:04.162425 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:04.390353 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:04.621504 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:04.662458 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:04.890460 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:05.121997 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:05.162890 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:05.390593 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:05.622582 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:05.662850 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:05.890118 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:06.122736 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:06.162893 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:06.391529 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:06.622355 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:06.662403 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:06.891677 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:07.123171 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:07.162220 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:07.389800 14891 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1216 19:37:07.630395 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:07.664404 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:07.890827 14891 kapi.go:107] duration metric: took 1m13.005342009s to wait for app.kubernetes.io/name=ingress-nginx ...
I1216 19:37:08.123366 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:08.163060 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:08.627418 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:08.729376 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:09.122663 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:09.162691 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:09.622068 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:09.663092 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:10.123027 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:10.224731 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:10.623229 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1216 19:37:10.671272 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:11.122165 14891 kapi.go:107] duration metric: took 1m12.503744021s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1216 19:37:11.124404 14891 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-618388 cluster.
I1216 19:37:11.125969 14891 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1216 19:37:11.127457 14891 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
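The two gcp-auth notes above describe a per-pod opt-out: pods carrying the `gcp-auth-skip-secret` label are skipped by the credential-mounting webhook. A minimal sketch of such a pod object, built with the k8s.io/api types, is shown below; the label value "true" and the pod/container names are illustrative assumptions, since the log only names the label key.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod that opts out of gcp-auth credential injection.
	// The label key comes from the log output above; the value "true" is an assumption.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "docker.io/library/nginx"},
			},
		},
	}
	manifest, err := json.MarshalIndent(&pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(manifest)) // save and apply with kubectl apply -f
}

Per the message above, pods created before gcp-auth finished enabling would need to be recreated, or the addon re-enabled with --refresh, to pick up the mounted credentials.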
I1216 19:37:11.162118 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:11.662394 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:12.162104 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:12.670233 14891 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1216 19:37:13.162388 14891 kapi.go:107] duration metric: took 1m17.004772258s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1216 19:37:13.164371 14891 out.go:177] * Enabled addons: amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
I1216 19:37:13.165865 14891 addons.go:510] duration metric: took 1m27.947743244s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns nvidia-device-plugin storage-provisioner cloud-spanner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
I1216 19:37:13.165905 14891 start.go:246] waiting for cluster config update ...
I1216 19:37:13.165923 14891 start.go:255] writing updated cluster config ...
I1216 19:37:13.166194 14891 ssh_runner.go:195] Run: rm -f paused
I1216 19:37:13.218386 14891 start.go:600] kubectl: 1.32.0, cluster: 1.32.0 (minor skew: 0)
I1216 19:37:13.220431 14891 out.go:177] * Done! kubectl is now configured to use "addons-618388" cluster and "default" namespace by default
==> CRI-O <==
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.695934483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6a45c62-9537-4be8-97d7-cf3536788fcb name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.698129818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378011698097956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6a45c62-9537-4be8-97d7-cf3536788fcb name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.698966302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60b5d690-be24-4b3f-ab44-c4bfe7d89ebd name=/runtime.v1.RuntimeService/ListContainers
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.699026911Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60b5d690-be24-4b3f-ab44-c4bfe7d89ebd name=/runtime.v1.RuntimeService/ListContainers
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.699347293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a83ce5fc3bf5d5eed034bb5e58b580fb6e83c8c250e63aa9e018497aec331259,PodSandboxId:012cfbaf366c7682320ff9b20e114008fbc8a2f619c0f2fb59d052d9c3dbab82,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734377873939416092,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 004073d4-980e-4fd9-ad94-dc4598f84218,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373b7d1d5c5a8decd5d0fede1509e853236e98f53519a70b5d20098e800239f5,PodSandboxId:b9d05acaaff9aa586fc1e7693ef04f13cb1b3c7d5b94c8d90ef5ca226eec4d83,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734377835495900239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12fe933d-2f3a-4b23-9e9d-2faa73db353b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b941a1e632cbc8f4a8a5493e67bf24cccf3be8fccf534e7fd10e567f414c58,PodSandboxId:3b7647453926fa4968304d00504647f8d88639d93cd6984c9cb917815d2d59a6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1734377826834387420,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-rtb85,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 918046a9-d03b-4717-8083-f1055bb8fa1e,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0389c8ef56a302abbd165e6d8c2aba1a54ed92bebb9e68df36c196e48f70b39a,PodSandboxId:ae114d6dcd6a359f99ccef2a0284d2896b6901240f689bb95babf1ae940d0ae9,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1734377812564024360,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lsm5p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 29c0b712-49ac-4316-b9d3-f602609b2309,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd2e36bd4d4598dd05712cdd0088e0da4a6a77814baac4e4b508a8fb57c5f9c4,PodSandboxId:784dd69c8e3614ccd13d710bd822fd1bc48e42ecaa5d7e4d0d5e1dfab67dd2f5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734377812020186227,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sgp7s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 927e60ad-2b4f-4cb6-9f3d-fdc73a5b0b8d,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac4f297a677bfe4a8426be0287541519521ff1fa4c953c559bb2a61cffe7c51,PodSandboxId:5347fbb80356ab6d733eb639850a88c731f4baa8dc7b8d431ca720fbd038b347,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1734377804295686677,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-f8t2h,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 01196ec1-9fa0-47a9-813e-64cd0afae7da,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c876f3d6402bd3090160248e2e7a745c32eb7f2c70463e5b2183aee03dc9e785,PodSandboxId:bb9c50c2e335b21fdaaae95166fe59e183555481160e0c1bcd0edf2859d4be8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734377778678659314,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t9xls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998af96b-a6d5-438c-8ffb-97b11028796f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273479a02f4bcce8b7f05f4909cde609eea1d516ef32336b7540d277526e2f1f,PodSandboxId:d7150f67485f635bcb7abfa2265b4812cbfe6f2ee89f64c3ce47c57931e0492e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d46
0978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1734377763574667739,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 913a8e1d-d56f-4b34-89b0-afa60ef45d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78577479cd15e2342c6dd4796ceb214c0d74c7572c9ae96444bdb9b7e2cba77d,PodSandboxId:c454f29e8eda3f10e01cde053db6649aaf970515a4153f172acb6773cbc41242,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734377751968521223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df30b29-628b-40a9-85a1-0a2edb5357ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74ac4a483ca6e2d67d6a49be154343cc60031a6dd0b78bc7e90fd9f07bfe3db1,PodSandboxId:7fc5e2399f4850db7633adb765e5e9b8ae49a564044e08e1b9afd42ede84e911,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734377750706940648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jqhz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d168f2c-2593-4ee9-a909-ced7e32adca5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:914213cd5da43b2e02f7ae5ef1ad795000630d45fff565905982bef7c39857ba,PodSandboxId:44cbb4acdf41d8e154dff11cf0fd9ae2796c4d94720b2bf81fb095cbb19a7b6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734377747175738840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397ca8ee-6184-4c67-9cc2-df6a118f9ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184416c7f2245b19
769b6b6c18aff4e9bfdab07bd818fd77cc33aff5bfa0eab,PodSandboxId:9b376e0f6943772f59f81c70e4b5efdf5563397deaff1bddee775af1780d9ef3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734377735981567898,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c652e97277b9a4e1265beba344d8e0db,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15085178c7262da01fec9432c0fc231cb4
3bf620ae6c2ccefc3eb2a726807c4a,PodSandboxId:6f85776c42f4225819f1c1cdc8ac0e8f3a93daca7328eb2f9deec4972480fc0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734377735975328343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4813f902c974c98326634283e67497,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a57ad54df62563b286ad667
2f38fdde8d7b769e145a520d7f2b05cedfb36e53,PodSandboxId:2a85ba66f0bb266d60865dc63bd2d06a4a3d0527a7f4709965b125b0297a51e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734377735966808524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfdf638021c8a1520d724d230dfdd84,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31ff64b1d64f3f9a8478142fdaf8ce5bd23879d6318582530651cb287bcd456,PodSan
dboxId:c239d7fd5353d698f1c91c0cd335d3511ef509bed97a7402812a366f79448486,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734377735959166165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7ed9c41ae63edd6868abd3c5d53735,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60b5d690-be24-4b3f-ab44-c4bfe7d89ebd name=/runtime.v1
.RuntimeService/ListContainers
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.736565916Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7aff5b67-95ce-47ef-9b5d-79489a2bb912 name=/runtime.v1.RuntimeService/Version
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.736661960Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7aff5b67-95ce-47ef-9b5d-79489a2bb912 name=/runtime.v1.RuntimeService/Version
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.737869833Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=127e0b07-5f4b-4255-80f9-9f6240521ea0 name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.739554265Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378011739519843,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=127e0b07-5f4b-4255-80f9-9f6240521ea0 name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.740287400Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dcfefa24-57c5-4965-9241-6383fa013f00 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.740408857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dcfefa24-57c5-4965-9241-6383fa013f00 name=/runtime.v1.RuntimeService/ListContainers
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.740899325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a83ce5fc3bf5d5eed034bb5e58b580fb6e83c8c250e63aa9e018497aec331259,PodSandboxId:012cfbaf366c7682320ff9b20e114008fbc8a2f619c0f2fb59d052d9c3dbab82,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734377873939416092,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 004073d4-980e-4fd9-ad94-dc4598f84218,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373b7d1d5c5a8decd5d0fede1509e853236e98f53519a70b5d20098e800239f5,PodSandboxId:b9d05acaaff9aa586fc1e7693ef04f13cb1b3c7d5b94c8d90ef5ca226eec4d83,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734377835495900239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12fe933d-2f3a-4b23-9e9d-2faa73db353b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b941a1e632cbc8f4a8a5493e67bf24cccf3be8fccf534e7fd10e567f414c58,PodSandboxId:3b7647453926fa4968304d00504647f8d88639d93cd6984c9cb917815d2d59a6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1734377826834387420,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-rtb85,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 918046a9-d03b-4717-8083-f1055bb8fa1e,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0389c8ef56a302abbd165e6d8c2aba1a54ed92bebb9e68df36c196e48f70b39a,PodSandboxId:ae114d6dcd6a359f99ccef2a0284d2896b6901240f689bb95babf1ae940d0ae9,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1734377812564024360,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lsm5p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 29c0b712-49ac-4316-b9d3-f602609b2309,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd2e36bd4d4598dd05712cdd0088e0da4a6a77814baac4e4b508a8fb57c5f9c4,PodSandboxId:784dd69c8e3614ccd13d710bd822fd1bc48e42ecaa5d7e4d0d5e1dfab67dd2f5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734377812020186227,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sgp7s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 927e60ad-2b4f-4cb6-9f3d-fdc73a5b0b8d,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac4f297a677bfe4a8426be0287541519521ff1fa4c953c559bb2a61cffe7c51,PodSandboxId:5347fbb80356ab6d733eb639850a88c731f4baa8dc7b8d431ca720fbd038b347,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1734377804295686677,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-f8t2h,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 01196ec1-9fa0-47a9-813e-64cd0afae7da,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c876f3d6402bd3090160248e2e7a745c32eb7f2c70463e5b2183aee03dc9e785,PodSandboxId:bb9c50c2e335b21fdaaae95166fe59e183555481160e0c1bcd0edf2859d4be8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734377778678659314,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t9xls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998af96b-a6d5-438c-8ffb-97b11028796f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273479a02f4bcce8b7f05f4909cde609eea1d516ef32336b7540d277526e2f1f,PodSandboxId:d7150f67485f635bcb7abfa2265b4812cbfe6f2ee89f64c3ce47c57931e0492e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d46
0978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1734377763574667739,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 913a8e1d-d56f-4b34-89b0-afa60ef45d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78577479cd15e2342c6dd4796ceb214c0d74c7572c9ae96444bdb9b7e2cba77d,PodSandboxId:c454f29e8eda3f10e01cde053db6649aaf970515a4153f172acb6773cbc41242,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734377751968521223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df30b29-628b-40a9-85a1-0a2edb5357ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74ac4a483ca6e2d67d6a49be154343cc60031a6dd0b78bc7e90fd9f07bfe3db1,PodSandboxId:7fc5e2399f4850db7633adb765e5e9b8ae49a564044e08e1b9afd42ede84e911,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734377750706940648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jqhz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d168f2c-2593-4ee9-a909-ced7e32adca5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:914213cd5da43b2e02f7ae5ef1ad795000630d45fff565905982bef7c39857ba,PodSandboxId:44cbb4acdf41d8e154dff11cf0fd9ae2796c4d94720b2bf81fb095cbb19a7b6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734377747175738840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397ca8ee-6184-4c67-9cc2-df6a118f9ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184416c7f2245b19
769b6b6c18aff4e9bfdab07bd818fd77cc33aff5bfa0eab,PodSandboxId:9b376e0f6943772f59f81c70e4b5efdf5563397deaff1bddee775af1780d9ef3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734377735981567898,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c652e97277b9a4e1265beba344d8e0db,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15085178c7262da01fec9432c0fc231cb4
3bf620ae6c2ccefc3eb2a726807c4a,PodSandboxId:6f85776c42f4225819f1c1cdc8ac0e8f3a93daca7328eb2f9deec4972480fc0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734377735975328343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4813f902c974c98326634283e67497,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a57ad54df62563b286ad667
2f38fdde8d7b769e145a520d7f2b05cedfb36e53,PodSandboxId:2a85ba66f0bb266d60865dc63bd2d06a4a3d0527a7f4709965b125b0297a51e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734377735966808524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfdf638021c8a1520d724d230dfdd84,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31ff64b1d64f3f9a8478142fdaf8ce5bd23879d6318582530651cb287bcd456,PodSan
dboxId:c239d7fd5353d698f1c91c0cd335d3511ef509bed97a7402812a366f79448486,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734377735959166165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7ed9c41ae63edd6868abd3c5d53735,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dcfefa24-57c5-4965-9241-6383fa013f00 name=/runtime.v1
.RuntimeService/ListContainers
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.758316468Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=3c1eec1b-0710-4ac4-974b-d17cdac220c5 name=/runtime.v1.RuntimeService/Status
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.758404678Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=3c1eec1b-0710-4ac4-974b-d17cdac220c5 name=/runtime.v1.RuntimeService/Status
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.759000145Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.760392222Z" level=debug msg="Using SQLite blob info cache at /var/lib/containers/cache/blob-info-cache-v1.sqlite" file="blobinfocache/default.go:74"
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.760834428Z" level=debug msg="Source is a manifest list; copying (only) instance sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 for current system" file="copy/copy.go:318"
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.760911949Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.786909082Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b0d1e0f-ef4f-4e97-80e7-8cd5c48da057 name=/runtime.v1.RuntimeService/Version
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.786999112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b0d1e0f-ef4f-4e97-80e7-8cd5c48da057 name=/runtime.v1.RuntimeService/Version
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.788639159Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=605e7d39-a8ae-4490-aaea-e8976b115ccc name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.790349792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378011790318087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=605e7d39-a8ae-4490-aaea-e8976b115ccc name=/runtime.v1.ImageService/ImageFsInfo
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.790920882Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cf1d189-90e1-46b1-882f-646910919aef name=/runtime.v1.RuntimeService/ListContainers
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.790975806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cf1d189-90e1-46b1-882f-646910919aef name=/runtime.v1.RuntimeService/ListContainers
Dec 16 19:40:11 addons-618388 crio[664]: time="2024-12-16 19:40:11.791310154Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a83ce5fc3bf5d5eed034bb5e58b580fb6e83c8c250e63aa9e018497aec331259,PodSandboxId:012cfbaf366c7682320ff9b20e114008fbc8a2f619c0f2fb59d052d9c3dbab82,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1734377873939416092,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 004073d4-980e-4fd9-ad94-dc4598f84218,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:373b7d1d5c5a8decd5d0fede1509e853236e98f53519a70b5d20098e800239f5,PodSandboxId:b9d05acaaff9aa586fc1e7693ef04f13cb1b3c7d5b94c8d90ef5ca226eec4d83,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1734377835495900239,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 12fe933d-2f3a-4b23-9e9d-2faa73db353b,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15b941a1e632cbc8f4a8a5493e67bf24cccf3be8fccf534e7fd10e567f414c58,PodSandboxId:3b7647453926fa4968304d00504647f8d88639d93cd6984c9cb917815d2d59a6,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1734377826834387420,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-rtb85,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 918046a9-d03b-4717-8083-f1055bb8fa1e,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:0389c8ef56a302abbd165e6d8c2aba1a54ed92bebb9e68df36c196e48f70b39a,PodSandboxId:ae114d6dcd6a359f99ccef2a0284d2896b6901240f689bb95babf1ae940d0ae9,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1734377812564024360,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-lsm5p,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 29c0b712-49ac-4316-b9d3-f602609b2309,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd2e36bd4d4598dd05712cdd0088e0da4a6a77814baac4e4b508a8fb57c5f9c4,PodSandboxId:784dd69c8e3614ccd13d710bd822fd1bc48e42ecaa5d7e4d0d5e1dfab67dd2f5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1734377812020186227,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sgp7s,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 927e60ad-2b4f-4cb6-9f3d-fdc73a5b0b8d,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac4f297a677bfe4a8426be0287541519521ff1fa4c953c559bb2a61cffe7c51,PodSandboxId:5347fbb80356ab6d733eb639850a88c731f4baa8dc7b8d431ca720fbd038b347,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1734377804295686677,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-76f89f99b5-f8t2h,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 01196ec1-9fa0-47a9-813e-64cd0afae7da,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c876f3d6402bd3090160248e2e7a745c32eb7f2c70463e5b2183aee03dc9e785,PodSandboxId:bb9c50c2e335b21fdaaae95166fe59e183555481160e0c1bcd0edf2859d4be8c,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1734377778678659314,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-t9xls,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 998af96b-a6d5-438c-8ffb-97b11028796f,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273479a02f4bcce8b7f05f4909cde609eea1d516ef32336b7540d277526e2f1f,PodSandboxId:d7150f67485f635bcb7abfa2265b4812cbfe6f2ee89f64c3ce47c57931e0492e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d46
0978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1734377763574667739,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 913a8e1d-d56f-4b34-89b0-afa60ef45d1a,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78577479cd15e2342c6dd4796ceb214c0d74c7572c9ae96444bdb9b7e2cba77d,PodSandboxId:c454f29e8eda3f10e01cde053db6649aaf970515a4153f172acb6773cbc41242,Metadata:&ContainerMetad
ata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1734377751968521223,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8df30b29-628b-40a9-85a1-0a2edb5357ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74ac4a483ca6e2d67d6a49be154343cc60031a6dd0b78bc7e90fd9f07bfe3db1,PodSandboxId:7fc5e2399f4850db7633adb765e5e9b8ae49a564044e08e1b9afd42ede84e911,Metadata:&ContainerMetadata{Name:cor
edns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1734377750706940648,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jqhz4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d168f2c-2593-4ee9-a909-ced7e32adca5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},
},&Container{Id:914213cd5da43b2e02f7ae5ef1ad795000630d45fff565905982bef7c39857ba,PodSandboxId:44cbb4acdf41d8e154dff11cf0fd9ae2796c4d94720b2bf81fb095cbb19a7b6e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1734377747175738840,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8t666,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 397ca8ee-6184-4c67-9cc2-df6a118f9ec7,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6184416c7f2245b19
769b6b6c18aff4e9bfdab07bd818fd77cc33aff5bfa0eab,PodSandboxId:9b376e0f6943772f59f81c70e4b5efdf5563397deaff1bddee775af1780d9ef3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1734377735981567898,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c652e97277b9a4e1265beba344d8e0db,},Annotations:map[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15085178c7262da01fec9432c0fc231cb4
3bf620ae6c2ccefc3eb2a726807c4a,PodSandboxId:6f85776c42f4225819f1c1cdc8ac0e8f3a93daca7328eb2f9deec4972480fc0b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1734377735975328343,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc4813f902c974c98326634283e67497,},Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a57ad54df62563b286ad667
2f38fdde8d7b769e145a520d7f2b05cedfb36e53,PodSandboxId:2a85ba66f0bb266d60865dc63bd2d06a4a3d0527a7f4709965b125b0297a51e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1734377735966808524,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cfdf638021c8a1520d724d230dfdd84,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e31ff64b1d64f3f9a8478142fdaf8ce5bd23879d6318582530651cb287bcd456,PodSan
dboxId:c239d7fd5353d698f1c91c0cd335d3511ef509bed97a7402812a366f79448486,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1734377735959166165,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-618388,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7b7ed9c41ae63edd6868abd3c5d53735,},Annotations:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cf1d189-90e1-46b1-882f-646910919aef name=/runtime.v1
.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
a83ce5fc3bf5d docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4 2 minutes ago Running nginx 0 012cfbaf366c7 nginx
373b7d1d5c5a8 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 2 minutes ago Running busybox 0 b9d05acaaff9a busybox
15b941a1e632c registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b 3 minutes ago Running controller 0 3b7647453926f ingress-nginx-controller-56d7c84fd4-rtb85
0389c8ef56a30 a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb 3 minutes ago Exited patch 1 ae114d6dcd6a3 ingress-nginx-admission-patch-lsm5p
dd2e36bd4d459 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f 3 minutes ago Exited create 0 784dd69c8e361 ingress-nginx-admission-create-sgp7s
eac4f297a677b docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 3 minutes ago Running local-path-provisioner 0 5347fbb80356a local-path-provisioner-76f89f99b5-f8t2h
c876f3d6402bd docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 3 minutes ago Running amd-gpu-device-plugin 0 bb9c50c2e335b amd-gpu-device-plugin-t9xls
273479a02f4bc gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab 4 minutes ago Running minikube-ingress-dns 0 d7150f67485f6 kube-ingress-dns-minikube
78577479cd15e 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 c454f29e8eda3 storage-provisioner
74ac4a483ca6e c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 4 minutes ago Running coredns 0 7fc5e2399f485 coredns-668d6bf9bc-jqhz4
914213cd5da43 040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08 4 minutes ago Running kube-proxy 0 44cbb4acdf41d kube-proxy-8t666
6184416c7f224 a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5 4 minutes ago Running kube-scheduler 0 9b376e0f69437 kube-scheduler-addons-618388
15085178c7262 8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3 4 minutes ago Running kube-controller-manager 0 6f85776c42f42 kube-controller-manager-addons-618388
3a57ad54df625 a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc 4 minutes ago Running etcd 0 2a85ba66f0bb2 etcd-addons-618388
e31ff64b1d64f c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4 4 minutes ago Running kube-apiserver 0 c239d7fd5353d kube-apiserver-addons-618388
==> coredns [74ac4a483ca6e2d67d6a49be154343cc60031a6dd0b78bc7e90fd9f07bfe3db1] <==
[INFO] 10.244.0.7:54220 - 2731 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000299477s
[INFO] 10.244.0.7:54220 - 20337 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000130969s
[INFO] 10.244.0.7:54220 - 4360 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000067619s
[INFO] 10.244.0.7:54220 - 61535 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000138201s
[INFO] 10.244.0.7:54220 - 8374 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000300711s
[INFO] 10.244.0.7:54220 - 54924 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000413344s
[INFO] 10.244.0.7:54220 - 42136 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000087175s
[INFO] 10.244.0.7:44681 - 23139 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149048s
[INFO] 10.244.0.7:44681 - 22873 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000410459s
[INFO] 10.244.0.7:38297 - 42266 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109641s
[INFO] 10.244.0.7:38297 - 41994 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00022381s
[INFO] 10.244.0.7:50387 - 21665 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009814s
[INFO] 10.244.0.7:50387 - 21422 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000213167s
[INFO] 10.244.0.7:38373 - 52908 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000079805s
[INFO] 10.244.0.7:38373 - 53086 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000217665s
[INFO] 10.244.0.23:39422 - 915 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00068431s
[INFO] 10.244.0.23:60480 - 39142 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000154041s
[INFO] 10.244.0.23:42727 - 48733 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000180931s
[INFO] 10.244.0.23:56814 - 26277 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100708s
[INFO] 10.244.0.23:47272 - 58386 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119271s
[INFO] 10.244.0.23:52013 - 18859 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000068854s
[INFO] 10.244.0.23:44785 - 22856 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003028236s
[INFO] 10.244.0.23:38917 - 36499 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004353333s
[INFO] 10.244.0.27:49189 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000686921s
[INFO] 10.244.0.27:56941 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000141104s
==> describe nodes <==
Name: addons-618388
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-618388
kubernetes.io/os=linux
minikube.k8s.io/commit=74e51ab701402ddc00f8ba70f2a2775c7dcd6477
minikube.k8s.io/name=addons-618388
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_12_16T19_35_41_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-618388
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 16 Dec 2024 19:35:38 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-618388
AcquireTime: <unset>
RenewTime: Mon, 16 Dec 2024 19:40:08 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 16 Dec 2024 19:38:23 +0000 Mon, 16 Dec 2024 19:35:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 16 Dec 2024 19:38:23 +0000 Mon, 16 Dec 2024 19:35:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 16 Dec 2024 19:38:23 +0000 Mon, 16 Dec 2024 19:35:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 16 Dec 2024 19:38:23 +0000 Mon, 16 Dec 2024 19:35:42 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.82
Hostname: addons-618388
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912788Ki
pods: 110
System Info:
Machine ID: 587b661faea140c3b5b4e0025416a25f
System UUID: 587b661f-aea1-40c3-b5b4-e0025416a25f
Boot ID: 5a26730d-8cc2-4b49-afb3-fcb48f5f35dd
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.32.0
Kube-Proxy Version: v1.32.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (14 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m59s
default hello-world-app-7d9564db4-pbr29 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m22s
ingress-nginx ingress-nginx-controller-56d7c84fd4-rtb85 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m18s
kube-system amd-gpu-device-plugin-t9xls 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m25s
kube-system coredns-668d6bf9bc-jqhz4 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m27s
kube-system etcd-addons-618388 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m33s
kube-system kube-apiserver-addons-618388 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m33s
kube-system kube-controller-manager-addons-618388 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m31s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m23s
kube-system kube-proxy-8t666 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m27s
kube-system kube-scheduler-addons-618388 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m31s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m22s
local-path-storage local-path-provisioner-76f89f99b5-f8t2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m20s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m23s kube-proxy
Normal NodeHasSufficientMemory 4m37s (x8 over 4m37s) kubelet Node addons-618388 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m37s (x8 over 4m37s) kubelet Node addons-618388 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m37s (x7 over 4m37s) kubelet Node addons-618388 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m37s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m31s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m31s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m31s kubelet Node addons-618388 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m31s kubelet Node addons-618388 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m31s kubelet Node addons-618388 status is now: NodeHasSufficientPID
Normal NodeReady 4m30s kubelet Node addons-618388 status is now: NodeReady
Normal RegisteredNode 4m28s node-controller Node addons-618388 event: Registered Node addons-618388 in Controller
==> dmesg <==
[ +5.180632] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
[ +0.080147] kauditd_printk_skb: 30 callbacks suppressed
[ +4.277520] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device
[ +1.152523] kauditd_printk_skb: 43 callbacks suppressed
[ +5.065925] kauditd_printk_skb: 110 callbacks suppressed
[ +5.014777] kauditd_printk_skb: 83 callbacks suppressed
[Dec16 19:36] kauditd_printk_skb: 117 callbacks suppressed
[ +25.659825] kauditd_printk_skb: 11 callbacks suppressed
[ +11.159570] kauditd_printk_skb: 27 callbacks suppressed
[ +5.745566] kauditd_printk_skb: 2 callbacks suppressed
[ +5.281036] kauditd_printk_skb: 10 callbacks suppressed
[ +5.306466] kauditd_printk_skb: 56 callbacks suppressed
[Dec16 19:37] kauditd_printk_skb: 19 callbacks suppressed
[ +5.698926] kauditd_printk_skb: 11 callbacks suppressed
[ +5.629396] kauditd_printk_skb: 7 callbacks suppressed
[ +5.536843] kauditd_printk_skb: 7 callbacks suppressed
[ +5.138200] kauditd_printk_skb: 4 callbacks suppressed
[ +11.879574] kauditd_printk_skb: 6 callbacks suppressed
[ +8.223527] kauditd_printk_skb: 31 callbacks suppressed
[ +5.100936] kauditd_printk_skb: 42 callbacks suppressed
[ +5.282447] kauditd_printk_skb: 36 callbacks suppressed
[Dec16 19:38] kauditd_printk_skb: 29 callbacks suppressed
[ +6.293016] kauditd_printk_skb: 20 callbacks suppressed
[ +8.353598] kauditd_printk_skb: 40 callbacks suppressed
[Dec16 19:40] kauditd_printk_skb: 49 callbacks suppressed
==> etcd [3a57ad54df62563b286ad6672f38fdde8d7b769e145a520d7f2b05cedfb36e53] <==
{"level":"info","ts":"2024-12-16T19:37:06.038068Z","caller":"traceutil/trace.go:171","msg":"trace[2119050398] transaction","detail":"{read_only:false; response_revision:1118; number_of_response:1; }","duration":"120.978461ms","start":"2024-12-16T19:37:05.917082Z","end":"2024-12-16T19:37:06.038060Z","steps":["trace[2119050398] 'process raft request' (duration: 120.375079ms)"],"step_count":1}
{"level":"info","ts":"2024-12-16T19:37:10.081927Z","caller":"traceutil/trace.go:171","msg":"trace[1767719427] linearizableReadLoop","detail":"{readStateIndex:1171; appliedIndex:1170; }","duration":"159.317157ms","start":"2024-12-16T19:37:09.922596Z","end":"2024-12-16T19:37:10.081913Z","steps":["trace[1767719427] 'read index received' (duration: 159.050824ms)","trace[1767719427] 'applied index is now lower than readState.Index' (duration: 265.884µs)"],"step_count":2}
{"level":"info","ts":"2024-12-16T19:37:10.082035Z","caller":"traceutil/trace.go:171","msg":"trace[790173965] transaction","detail":"{read_only:false; response_revision:1139; number_of_response:1; }","duration":"328.383251ms","start":"2024-12-16T19:37:09.753644Z","end":"2024-12-16T19:37:10.082027Z","steps":["trace[790173965] 'process raft request' (duration: 328.042984ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-16T19:37:10.082115Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T19:37:09.753629Z","time spent":"328.424342ms","remote":"127.0.0.1:59182","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1117 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
{"level":"warn","ts":"2024-12-16T19:37:10.082396Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.798616ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-12-16T19:37:10.082437Z","caller":"traceutil/trace.go:171","msg":"trace[660412259] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1139; }","duration":"159.841945ms","start":"2024-12-16T19:37:09.922588Z","end":"2024-12-16T19:37:10.082430Z","steps":["trace[660412259] 'agreement among raft nodes before linearized reading' (duration: 159.789168ms)"],"step_count":1}
{"level":"info","ts":"2024-12-16T19:37:41.688895Z","caller":"traceutil/trace.go:171","msg":"trace[952250791] linearizableReadLoop","detail":"{readStateIndex:1407; appliedIndex:1406; }","duration":"240.92878ms","start":"2024-12-16T19:37:41.447941Z","end":"2024-12-16T19:37:41.688870Z","steps":["trace[952250791] 'read index received' (duration: 240.723036ms)","trace[952250791] 'applied index is now lower than readState.Index' (duration: 205.31µs)"],"step_count":2}
{"level":"info","ts":"2024-12-16T19:37:41.689095Z","caller":"traceutil/trace.go:171","msg":"trace[1889386926] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"247.556638ms","start":"2024-12-16T19:37:41.441529Z","end":"2024-12-16T19:37:41.689086Z","steps":["trace[1889386926] 'process raft request' (duration: 247.147493ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-16T19:37:41.689350Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.385942ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:499"}
{"level":"info","ts":"2024-12-16T19:37:41.689378Z","caller":"traceutil/trace.go:171","msg":"trace[1482468366] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1361; }","duration":"241.457752ms","start":"2024-12-16T19:37:41.447914Z","end":"2024-12-16T19:37:41.689372Z","steps":["trace[1482468366] 'agreement among raft nodes before linearized reading' (duration: 241.365968ms)"],"step_count":1}
{"level":"info","ts":"2024-12-16T19:37:41.815753Z","caller":"traceutil/trace.go:171","msg":"trace[1068199803] linearizableReadLoop","detail":"{readStateIndex:1408; appliedIndex:1407; }","duration":"112.742781ms","start":"2024-12-16T19:37:41.702994Z","end":"2024-12-16T19:37:41.815737Z","steps":["trace[1068199803] 'read index received' (duration: 111.526959ms)","trace[1068199803] 'applied index is now lower than readState.Index' (duration: 1.215222ms)"],"step_count":2}
{"level":"info","ts":"2024-12-16T19:37:41.815980Z","caller":"traceutil/trace.go:171","msg":"trace[1142545525] transaction","detail":"{read_only:false; response_revision:1362; number_of_response:1; }","duration":"117.734669ms","start":"2024-12-16T19:37:41.698236Z","end":"2024-12-16T19:37:41.815971Z","steps":["trace[1142545525] 'process raft request' (duration: 116.326442ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-16T19:37:41.816173Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"113.162779ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-12-16T19:37:41.816196Z","caller":"traceutil/trace.go:171","msg":"trace[1977513151] range","detail":"{range_begin:/registry/secrets; range_end:; response_count:0; response_revision:1362; }","duration":"113.217626ms","start":"2024-12-16T19:37:41.702971Z","end":"2024-12-16T19:37:41.816189Z","steps":["trace[1977513151] 'agreement among raft nodes before linearized reading' (duration: 113.163549ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-16T19:37:41.816283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.291883ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-12-16T19:37:41.816295Z","caller":"traceutil/trace.go:171","msg":"trace[190069588] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1362; }","duration":"104.32601ms","start":"2024-12-16T19:37:41.711965Z","end":"2024-12-16T19:37:41.816291Z","steps":["trace[190069588] 'agreement among raft nodes before linearized reading' (duration: 104.302311ms)"],"step_count":1}
{"level":"info","ts":"2024-12-16T19:37:44.588191Z","caller":"traceutil/trace.go:171","msg":"trace[252821531] transaction","detail":"{read_only:false; response_revision:1379; number_of_response:1; }","duration":"260.43171ms","start":"2024-12-16T19:37:44.327742Z","end":"2024-12-16T19:37:44.588174Z","steps":["trace[252821531] 'process raft request' (duration: 259.918301ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-16T19:37:49.566066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.963537ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/registry-proxy\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-12-16T19:37:49.566115Z","caller":"traceutil/trace.go:171","msg":"trace[391387733] range","detail":"{range_begin:/registry/daemonsets/kube-system/registry-proxy; range_end:; response_count:0; response_revision:1446; }","duration":"184.063025ms","start":"2024-12-16T19:37:49.382042Z","end":"2024-12-16T19:37:49.566105Z","steps":["trace[391387733] 'range keys from in-memory index tree' (duration: 183.932072ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-16T19:37:49.566263Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.350139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/registry-proxy-49ln5\" limit:1 ","response":"range_response_count:1 size:4039"}
{"level":"info","ts":"2024-12-16T19:37:49.566278Z","caller":"traceutil/trace.go:171","msg":"trace[1060044770] range","detail":"{range_begin:/registry/pods/kube-system/registry-proxy-49ln5; range_end:; response_count:1; response_revision:1446; }","duration":"184.442363ms","start":"2024-12-16T19:37:49.381831Z","end":"2024-12-16T19:37:49.566274Z","steps":["trace[1060044770] 'range keys from in-memory index tree' (duration: 184.093609ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-16T19:37:49.566474Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.832845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregated-metrics-reader\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-12-16T19:37:49.566496Z","caller":"traceutil/trace.go:171","msg":"trace[855124741] range","detail":"{range_begin:/registry/clusterroles/system:aggregated-metrics-reader; range_end:; response_count:0; response_revision:1446; }","duration":"183.939334ms","start":"2024-12-16T19:37:49.382551Z","end":"2024-12-16T19:37:49.566490Z","steps":["trace[855124741] 'range keys from in-memory index tree' (duration: 183.792308ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-16T19:37:49.567339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.550277ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2606902285041163582 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/registry-proxy-49ln5.1811bf7baaa05d24\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/registry-proxy-49ln5.1811bf7baaa05d24\" value_size:651 lease:2606902285041163266 >> failure:<>>","response":"size:16"}
{"level":"info","ts":"2024-12-16T19:37:49.567419Z","caller":"traceutil/trace.go:171","msg":"trace[1325591466] transaction","detail":"{read_only:false; response_revision:1447; number_of_response:1; }","duration":"184.61541ms","start":"2024-12-16T19:37:49.382794Z","end":"2024-12-16T19:37:49.567410Z","steps":["trace[1325591466] 'process raft request' (duration: 15.732254ms)","trace[1325591466] 'compare' (duration: 167.163416ms)"],"step_count":2}
==> kernel <==
19:40:12 up 5 min, 0 users, load average: 0.55, 1.15, 0.61
Linux addons-618388 5.10.207 #1 SMP Thu Dec 12 23:38:00 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [e31ff64b1d64f3f9a8478142fdaf8ce5bd23879d6318582530651cb287bcd456] <==
I1216 19:36:29.303274 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
E1216 19:36:29.320633 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
E1216 19:37:20.996203 1 conn.go:339] Error on socket receive: read tcp 192.168.39.82:8443->192.168.39.1:56722: use of closed network connection
E1216 19:37:21.187221 1 conn.go:339] Error on socket receive: read tcp 192.168.39.82:8443->192.168.39.1:56746: use of closed network connection
I1216 19:37:30.632686 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.15.9"}
I1216 19:37:49.970356 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I1216 19:37:50.154148 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.67.217"}
I1216 19:37:53.177610 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1216 19:37:56.178071 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W1216 19:37:57.311201 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I1216 19:38:17.523843 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 19:38:17.523914 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1216 19:38:17.545884 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 19:38:17.545947 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1216 19:38:17.598547 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 19:38:17.598816 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1216 19:38:17.700095 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 19:38:17.700305 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1216 19:38:17.705832 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1216 19:38:17.705876 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1216 19:38:18.701261 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1216 19:38:18.706307 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1216 19:38:18.815039 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1216 19:38:30.293022 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
I1216 19:40:10.536303 1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.10.195"}
==> kube-controller-manager [15085178c7262da01fec9432c0fc231cb43bf620ae6c2ccefc3eb2a726807c4a] <==
E1216 19:39:09.764522 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1216 19:39:35.451267 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E1216 19:39:35.452328 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
W1216 19:39:35.453323 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1216 19:39:35.453354 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1216 19:39:37.570212 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E1216 19:39:37.571438 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
W1216 19:39:37.572354 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1216 19:39:37.572435 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1216 19:39:44.463622 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E1216 19:39:44.464587 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
W1216 19:39:44.465550 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1216 19:39:44.465598 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1216 19:40:05.447971 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E1216 19:40:05.449281 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
W1216 19:40:05.450234 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1216 19:40:05.450316 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1216 19:40:10.064188 1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
E1216 19:40:10.065307 1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
W1216 19:40:10.066193 1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1216 19:40:10.066226 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I1216 19:40:10.363378 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="41.695368ms"
I1216 19:40:10.387766 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="24.226465ms"
I1216 19:40:10.406831 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="18.923445ms"
I1216 19:40:10.407097 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="110.325µs"
==> kube-proxy [914213cd5da43b2e02f7ae5ef1ad795000630d45fff565905982bef7c39857ba] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E1216 19:35:48.357970 1 proxier.go:733] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I1216 19:35:48.369141 1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.82"]
E1216 19:35:48.369207 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1216 19:35:48.483941 1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
I1216 19:35:48.483971 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1216 19:35:48.483992 1 server_linux.go:170] "Using iptables Proxier"
I1216 19:35:48.499561 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1216 19:35:48.499903 1 server.go:497] "Version info" version="v1.32.0"
I1216 19:35:48.499916 1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1216 19:35:48.506236 1 config.go:199] "Starting service config controller"
I1216 19:35:48.506335 1 shared_informer.go:313] Waiting for caches to sync for service config
I1216 19:35:48.506419 1 config.go:105] "Starting endpoint slice config controller"
I1216 19:35:48.506424 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I1216 19:35:48.511294 1 config.go:329] "Starting node config controller"
I1216 19:35:48.511320 1 shared_informer.go:313] Waiting for caches to sync for node config
I1216 19:35:48.608236 1 shared_informer.go:320] Caches are synced for endpoint slice config
I1216 19:35:48.608274 1 shared_informer.go:320] Caches are synced for service config
I1216 19:35:48.618826 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [6184416c7f2245b19769b6b6c18aff4e9bfdab07bd818fd77cc33aff5bfa0eab] <==
W1216 19:35:38.372443 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1216 19:35:38.372870 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1216 19:35:39.308036 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1216 19:35:39.308087 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1216 19:35:39.334838 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1216 19:35:39.335184 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1216 19:35:39.429411 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1216 19:35:39.429598 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1216 19:35:39.445579 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1216 19:35:39.445785 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1216 19:35:39.460982 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
E1216 19:35:39.461118 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1216 19:35:39.547899 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1216 19:35:39.548066 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1216 19:35:39.551809 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1216 19:35:39.551899 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1216 19:35:39.569034 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1216 19:35:39.569222 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W1216 19:35:39.589889 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1216 19:35:39.590008 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1216 19:35:39.708266 1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1216 19:35:39.708490 1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W1216 19:35:39.721372 1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1216 19:35:39.721877 1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
I1216 19:35:42.369034 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Dec 16 19:39:41 addons-618388 kubelet[1226]: Perhaps ip6tables or your kernel needs to be upgraded.
Dec 16 19:39:41 addons-618388 kubelet[1226]: > table="nat" chain="KUBE-KUBELET-CANARY"
Dec 16 19:39:41 addons-618388 kubelet[1226]: E1216 19:39:41.605412 1226 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734377981604891630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Dec 16 19:39:41 addons-618388 kubelet[1226]: E1216 19:39:41.605452 1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734377981604891630,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Dec 16 19:39:51 addons-618388 kubelet[1226]: E1216 19:39:51.608782 1226 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734377991608338970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Dec 16 19:39:51 addons-618388 kubelet[1226]: E1216 19:39:51.608948 1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734377991608338970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Dec 16 19:39:56 addons-618388 kubelet[1226]: I1216 19:39:56.288650 1226 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Dec 16 19:40:01 addons-618388 kubelet[1226]: E1216 19:40:01.611662 1226 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378001611183959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Dec 16 19:40:01 addons-618388 kubelet[1226]: E1216 19:40:01.611687 1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378001611183959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369553 1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="7c08e8c6-a4d2-48d1-8641-fce068dbafa2" containerName="csi-resizer"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369584 1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c682dd96-c52d-4c59-8b61-6fb5e8f9027a" containerName="csi-external-health-monitor-controller"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369592 1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="4a9bc6bd-7ed3-4b60-9f26-33fb55f94e9e" containerName="volume-snapshot-controller"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369597 1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c682dd96-c52d-4c59-8b61-6fb5e8f9027a" containerName="csi-snapshotter"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369603 1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c682dd96-c52d-4c59-8b61-6fb5e8f9027a" containerName="hostpath"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369607 1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c682dd96-c52d-4c59-8b61-6fb5e8f9027a" containerName="csi-provisioner"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369612 1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="2af3ed28-f280-421a-941f-b1c7d9a7b143" containerName="helper-pod"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369618 1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c8817fea-96d6-4405-8c50-674c5e47b8c7" containerName="volume-snapshot-controller"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369623 1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c682dd96-c52d-4c59-8b61-6fb5e8f9027a" containerName="node-driver-registrar"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369628 1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c331e026-63d4-4f10-a0c5-3bf7d22b1740" containerName="task-pv-container"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369632 1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="c682dd96-c52d-4c59-8b61-6fb5e8f9027a" containerName="liveness-probe"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.369637 1226 memory_manager.go:355] "RemoveStaleState removing state" podUID="a6ff89b4-0d31-4e72-826a-12cf756c7e4c" containerName="csi-attacher"
Dec 16 19:40:10 addons-618388 kubelet[1226]: I1216 19:40:10.458541 1226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhbv6\" (UniqueName: \"kubernetes.io/projected/3f8649c3-4648-465c-ab4d-19179cbee81d-kube-api-access-jhbv6\") pod \"hello-world-app-7d9564db4-pbr29\" (UID: \"3f8649c3-4648-465c-ab4d-19179cbee81d\") " pod="default/hello-world-app-7d9564db4-pbr29"
Dec 16 19:40:11 addons-618388 kubelet[1226]: E1216 19:40:11.615842 1226 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378011615240313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Dec 16 19:40:11 addons-618388 kubelet[1226]: E1216 19:40:11.615869 1226 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734378011615240313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595926,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
Dec 16 19:40:12 addons-618388 kubelet[1226]: I1216 19:40:12.288174 1226 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-t9xls" secret="" err="secret \"gcp-auth\" not found"
==> storage-provisioner [78577479cd15e2342c6dd4796ceb214c0d74c7572c9ae96444bdb9b7e2cba77d] <==
I1216 19:35:53.221430 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1216 19:35:53.277565 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1216 19:35:53.283325 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1216 19:35:53.396537 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1216 19:35:53.396686 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-618388_bfcdc5f3-9c79-4ede-86c9-457166d105fe!
I1216 19:35:53.397653 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e581e24f-4a34-4429-b10b-04c523c86f00", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-618388_bfcdc5f3-9c79-4ede-86c9-457166d105fe became leader
I1216 19:35:53.524028 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-618388_bfcdc5f3-9c79-4ede-86c9-457166d105fe!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-618388 -n addons-618388
helpers_test.go:261: (dbg) Run: kubectl --context addons-618388 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-pbr29 ingress-nginx-admission-create-sgp7s ingress-nginx-admission-patch-lsm5p
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-618388 describe pod hello-world-app-7d9564db4-pbr29 ingress-nginx-admission-create-sgp7s ingress-nginx-admission-patch-lsm5p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-618388 describe pod hello-world-app-7d9564db4-pbr29 ingress-nginx-admission-create-sgp7s ingress-nginx-admission-patch-lsm5p: exit status 1 (68.739624ms)
-- stdout --
Name: hello-world-app-7d9564db4-pbr29
Namespace: default
Priority: 0
Service Account: default
Node: addons-618388/192.168.39.82
Start Time: Mon, 16 Dec 2024 19:40:10 +0000
Labels: app=hello-world-app
pod-template-hash=7d9564db4
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hello-world-app-7d9564db4
Containers:
hello-world-app:
Container ID:
Image: docker.io/kicbase/echo-server:1.0
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jhbv6 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-jhbv6:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-pbr29 to addons-618388
  Normal  Pulling    3s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.423s (1.423s including waiting). Image size: 4944818 bytes.
  Normal  Created    1s    kubelet            Created container: hello-world-app
  Normal  Started    1s    kubelet            Started container hello-world-app
-- /stdout --
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-sgp7s" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-lsm5p" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-618388 describe pod hello-world-app-7d9564db4-pbr29 ingress-nginx-admission-create-sgp7s ingress-nginx-admission-patch-lsm5p: exit status 1
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-618388 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 addons disable ingress-dns --alsologtostderr -v=1: (1.218647901s)
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-618388 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-618388 addons disable ingress --alsologtostderr -v=1: (7.758837783s)
--- FAIL: TestAddons/parallel/Ingress (152.48s)
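Note (not part of the captured log): the post-mortem above lists non-running pods and then describes them by bare name, which is likely why the two ingress-nginx admission pods come back NotFound when describing against the default namespace. Below is a minimal stand-alone Go sketch of that same query and describe step, carrying the namespace along so pods outside default can be described too. It is illustrative only, not code from the test suite, and assumes kubectl is on PATH and the addons-618388 context is still available.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Assumed context name, taken from the log above; replace as needed.
	ctx := "addons-618388"

	// Same field selector the test helper uses: pods whose phase is not Running.
	// The jsonpath emits one "namespace/name" entry per pod.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o", `jsonpath={range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\n"}{end}`,
		"--field-selector", "status.phase!=Running").Output()
	if err != nil {
		fmt.Println("listing non-running pods failed:", err)
		return
	}

	for _, entry := range strings.Fields(string(out)) {
		parts := strings.SplitN(entry, "/", 2)
		if len(parts) != 2 {
			continue
		}
		// Describe each non-running pod in its own namespace; a NotFound error
		// here would mean the pod was removed between the list and the describe.
		desc, derr := exec.Command("kubectl", "--context", ctx,
			"describe", "pod", "-n", parts[0], parts[1]).CombinedOutput()
		fmt.Printf("==> %s <==\n%s", entry, desc)
		if derr != nil {
			fmt.Println("describe failed:", derr)
		}
	}
}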