=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run: kubectl --context addons-093588 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run: kubectl --context addons-093588 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run: kubectl --context addons-093588 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9cf016d6-ed93-4bb5-94f4-88b82ea95ba5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9cf016d6-ed93-4bb5-94f4-88b82ea95ba5] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003858989s
I1202 11:34:05.895279 13416 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run: out/minikube-linux-amd64 -p addons-093588 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
2024/12/02 11:34:05 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
I1202 11:34:05.900016 13416 retry.go:31] will retry after 856.565136ms: GET http://192.168.39.203:5000 giving up after 5 attempt(s): Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:34:06 [DEBUG] GET http://192.168.39.203:5000
2024/12/02 11:34:06 [ERR] GET http://192.168.39.203:5000 request failed: Get "http://192.168.39.203:5000": dial tcp 192.168.39.203:5000: connect: connection refused
2024/12/02 11:34:06 [DEBUG] GET http://192.168.39.203:5000: retrying in 1s (4 left)
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-093588 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.044011418s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run: kubectl --context addons-093588 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run: out/minikube-linux-amd64 -p addons-093588 ip
addons_test.go:297: (dbg) Run: nslookup hello-john.test 192.168.39.203
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-093588 -n addons-093588
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 -p addons-093588 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-093588 logs -n 25: (1.421739284s)
helpers_test.go:252: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| delete | --all | minikube | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
| delete | -p download-only-257770 | download-only-257770 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
| delete | -p download-only-407914 | download-only-407914 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
| delete | -p download-only-257770 | download-only-257770 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
| start | --download-only -p | binary-mirror-408241 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | |
| | binary-mirror-408241 | | | | | |
| | --alsologtostderr | | | | | |
| | --binary-mirror | | | | | |
| | http://127.0.0.1:43999 | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| delete | -p binary-mirror-408241 | binary-mirror-408241 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:30 UTC |
| addons | disable dashboard -p | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | |
| | addons-093588 | | | | | |
| addons | enable dashboard -p | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | |
| | addons-093588 | | | | | |
| start | -p addons-093588 --wait=true | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:30 UTC | 02 Dec 24 11:32 UTC |
| | --memory=4000 --alsologtostderr | | | | | |
| | --addons=registry | | | | | |
| | --addons=metrics-server | | | | | |
| | --addons=volumesnapshots | | | | | |
| | --addons=csi-hostpath-driver | | | | | |
| | --addons=gcp-auth | | | | | |
| | --addons=cloud-spanner | | | | | |
| | --addons=inspektor-gadget | | | | | |
| | --addons=nvidia-device-plugin | | | | | |
| | --addons=yakd --addons=volcano | | | | | |
| | --addons=amd-gpu-device-plugin | | | | | |
| | --driver=kvm2 | | | | | |
| | --container-runtime=crio | | | | | |
| | --addons=ingress | | | | | |
| | --addons=ingress-dns | | | | | |
| | --addons=storage-provisioner-rancher | | | | | |
| addons | addons-093588 addons disable | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:32 UTC | 02 Dec 24 11:32 UTC |
| | volcano --alsologtostderr -v=1 | | | | | |
| addons | addons-093588 addons disable | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:32 UTC | 02 Dec 24 11:33 UTC |
| | gcp-auth --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | enable headlamp | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
| | -p addons-093588 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-093588 addons | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
| | disable nvidia-device-plugin | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-093588 ssh cat | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
| | /opt/local-path-provisioner/pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1_default_test-pvc/file1 | | | | | |
| addons | addons-093588 addons disable | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
| | storage-provisioner-rancher | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-093588 addons disable | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
| | headlamp --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| addons | addons-093588 addons disable | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
| | yakd --alsologtostderr -v=1 | | | | | |
| ip | addons-093588 ip | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
| addons | addons-093588 addons | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
| | disable cloud-spanner | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-093588 addons | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:33 UTC | 02 Dec 24 11:33 UTC |
| | disable inspektor-gadget | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| ssh | addons-093588 ssh curl -s | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | |
| | http://127.0.0.1/ -H 'Host: | | | | | |
| | nginx.example.com' | | | | | |
| addons | addons-093588 addons | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
| | disable volumesnapshots | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-093588 addons | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
| | disable csi-hostpath-driver | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| addons | addons-093588 addons disable | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:34 UTC | 02 Dec 24 11:34 UTC |
| | registry --alsologtostderr | | | | | |
| | -v=1 | | | | | |
| ip | addons-093588 ip | addons-093588 | jenkins | v1.34.0 | 02 Dec 24 11:36 UTC | 02 Dec 24 11:36 UTC |
|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/12/02 11:30:37
Running on machine: ubuntu-20-agent-13
Binary: Built with gc go1.23.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1202 11:30:37.455381 14046 out.go:345] Setting OutFile to fd 1 ...
I1202 11:30:37.455480 14046 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:30:37.455489 14046 out.go:358] Setting ErrFile to fd 2...
I1202 11:30:37.455493 14046 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1202 11:30:37.455668 14046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20033-6257/.minikube/bin
I1202 11:30:37.456323 14046 out.go:352] Setting JSON to false
I1202 11:30:37.457128 14046 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":789,"bootTime":1733138248,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1202 11:30:37.457182 14046 start.go:139] virtualization: kvm guest
I1202 11:30:37.459050 14046 out.go:177] * [addons-093588] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
I1202 11:30:37.460220 14046 notify.go:220] Checking for updates...
I1202 11:30:37.460254 14046 out.go:177] - MINIKUBE_LOCATION=20033
I1202 11:30:37.461315 14046 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1202 11:30:37.462351 14046 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20033-6257/kubeconfig
I1202 11:30:37.463400 14046 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20033-6257/.minikube
I1202 11:30:37.464380 14046 out.go:177] - MINIKUBE_BIN=out/minikube-linux-amd64
I1202 11:30:37.465325 14046 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1202 11:30:37.466424 14046 driver.go:394] Setting default libvirt URI to qemu:///system
I1202 11:30:37.495915 14046 out.go:177] * Using the kvm2 driver based on user configuration
I1202 11:30:37.497029 14046 start.go:297] selected driver: kvm2
I1202 11:30:37.497047 14046 start.go:901] validating driver "kvm2" against <nil>
I1202 11:30:37.497060 14046 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1202 11:30:37.497712 14046 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 11:30:37.497776 14046 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20033-6257/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1202 11:30:37.512199 14046 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
I1202 11:30:37.512258 14046 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I1202 11:30:37.512498 14046 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1202 11:30:37.512526 14046 cni.go:84] Creating CNI manager for ""
I1202 11:30:37.512569 14046 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1202 11:30:37.512581 14046 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1202 11:30:37.512629 14046 start.go:340] cluster config:
{Name:addons-093588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1202 11:30:37.512716 14046 iso.go:125] acquiring lock: {Name:mk7f187f0058b5a97a40305cfb11719a190cb753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1202 11:30:37.515047 14046 out.go:177] * Starting "addons-093588" primary control-plane node in "addons-093588" cluster
I1202 11:30:37.516087 14046 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1202 11:30:37.516117 14046 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
I1202 11:30:37.516127 14046 cache.go:56] Caching tarball of preloaded images
I1202 11:30:37.516196 14046 preload.go:172] Found /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1202 11:30:37.516208 14046 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
I1202 11:30:37.516518 14046 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/config.json ...
I1202 11:30:37.516542 14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/config.json: {Name:mk15de776ac6faf6fd8a23110b6fb90c273126c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:30:37.516686 14046 start.go:360] acquireMachinesLock for addons-093588: {Name:mkf91465ea96483fd1507740f31b2b9ab7f9f919 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1202 11:30:37.516736 14046 start.go:364] duration metric: took 35.877µs to acquireMachinesLock for "addons-093588"
I1202 11:30:37.516755 14046 start.go:93] Provisioning new machine with config: &{Name:addons-093588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1202 11:30:37.516809 14046 start.go:125] createHost starting for "" (driver="kvm2")
I1202 11:30:37.518955 14046 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
I1202 11:30:37.519064 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:30:37.519111 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:30:37.532176 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35949
I1202 11:30:37.532631 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:30:37.533117 14046 main.go:141] libmachine: Using API Version 1
I1202 11:30:37.533134 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:30:37.533432 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:30:37.533598 14046 main.go:141] libmachine: (addons-093588) Calling .GetMachineName
I1202 11:30:37.533741 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:30:37.533872 14046 start.go:159] libmachine.API.Create for "addons-093588" (driver="kvm2")
I1202 11:30:37.533900 14046 client.go:168] LocalClient.Create starting
I1202 11:30:37.533936 14046 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem
I1202 11:30:37.890362 14046 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem
I1202 11:30:38.028981 14046 main.go:141] libmachine: Running pre-create checks...
I1202 11:30:38.029000 14046 main.go:141] libmachine: (addons-093588) Calling .PreCreateCheck
I1202 11:30:38.029460 14046 main.go:141] libmachine: (addons-093588) Calling .GetConfigRaw
I1202 11:30:38.029866 14046 main.go:141] libmachine: Creating machine...
I1202 11:30:38.029880 14046 main.go:141] libmachine: (addons-093588) Calling .Create
I1202 11:30:38.030036 14046 main.go:141] libmachine: (addons-093588) Creating KVM machine...
I1202 11:30:38.031150 14046 main.go:141] libmachine: (addons-093588) DBG | found existing default KVM network
I1202 11:30:38.031811 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.031684 14068 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002011f0}
I1202 11:30:38.031852 14046 main.go:141] libmachine: (addons-093588) DBG | created network xml:
I1202 11:30:38.031872 14046 main.go:141] libmachine: (addons-093588) DBG | <network>
I1202 11:30:38.031885 14046 main.go:141] libmachine: (addons-093588) DBG | <name>mk-addons-093588</name>
I1202 11:30:38.031900 14046 main.go:141] libmachine: (addons-093588) DBG | <dns enable='no'/>
I1202 11:30:38.031929 14046 main.go:141] libmachine: (addons-093588) DBG |
I1202 11:30:38.031958 14046 main.go:141] libmachine: (addons-093588) DBG | <ip address='192.168.39.1' netmask='255.255.255.0'>
I1202 11:30:38.031973 14046 main.go:141] libmachine: (addons-093588) DBG | <dhcp>
I1202 11:30:38.031985 14046 main.go:141] libmachine: (addons-093588) DBG | <range start='192.168.39.2' end='192.168.39.253'/>
I1202 11:30:38.031991 14046 main.go:141] libmachine: (addons-093588) DBG | </dhcp>
I1202 11:30:38.031998 14046 main.go:141] libmachine: (addons-093588) DBG | </ip>
I1202 11:30:38.032003 14046 main.go:141] libmachine: (addons-093588) DBG |
I1202 11:30:38.032010 14046 main.go:141] libmachine: (addons-093588) DBG | </network>
I1202 11:30:38.032020 14046 main.go:141] libmachine: (addons-093588) DBG |
I1202 11:30:38.037024 14046 main.go:141] libmachine: (addons-093588) DBG | trying to create private KVM network mk-addons-093588 192.168.39.0/24...
I1202 11:30:38.095436 14046 main.go:141] libmachine: (addons-093588) DBG | private KVM network mk-addons-093588 192.168.39.0/24 created
I1202 11:30:38.095476 14046 main.go:141] libmachine: (addons-093588) Setting up store path in /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588 ...
I1202 11:30:38.095496 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.095389 14068 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20033-6257/.minikube
I1202 11:30:38.095510 14046 main.go:141] libmachine: (addons-093588) Building disk image from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
I1202 11:30:38.095536 14046 main.go:141] libmachine: (addons-093588) Downloading /home/jenkins/minikube-integration/20033-6257/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20033-6257/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
I1202 11:30:38.351649 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.351512 14068 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa...
I1202 11:30:38.416171 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.416080 14068 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/addons-093588.rawdisk...
I1202 11:30:38.416198 14046 main.go:141] libmachine: (addons-093588) DBG | Writing magic tar header
I1202 11:30:38.416275 14046 main.go:141] libmachine: (addons-093588) DBG | Writing SSH key tar header
I1202 11:30:38.416312 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:38.416182 14068 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588 ...
I1202 11:30:38.416332 14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588 (perms=drwx------)
I1202 11:30:38.416347 14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588
I1202 11:30:38.416361 14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube/machines
I1202 11:30:38.416368 14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257/.minikube
I1202 11:30:38.416379 14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20033-6257
I1202 11:30:38.416384 14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
I1202 11:30:38.416391 14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube/machines (perms=drwxr-xr-x)
I1202 11:30:38.416403 14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257/.minikube (perms=drwxr-xr-x)
I1202 11:30:38.416414 14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration/20033-6257 (perms=drwxrwxr-x)
I1202 11:30:38.416422 14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home/jenkins
I1202 11:30:38.416433 14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1202 11:30:38.416445 14046 main.go:141] libmachine: (addons-093588) DBG | Checking permissions on dir: /home
I1202 11:30:38.416458 14046 main.go:141] libmachine: (addons-093588) DBG | Skipping /home - not owner
I1202 11:30:38.416469 14046 main.go:141] libmachine: (addons-093588) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1202 11:30:38.416477 14046 main.go:141] libmachine: (addons-093588) Creating domain...
I1202 11:30:38.417348 14046 main.go:141] libmachine: (addons-093588) define libvirt domain using xml:
I1202 11:30:38.417385 14046 main.go:141] libmachine: (addons-093588) <domain type='kvm'>
I1202 11:30:38.417398 14046 main.go:141] libmachine: (addons-093588) <name>addons-093588</name>
I1202 11:30:38.417413 14046 main.go:141] libmachine: (addons-093588) <memory unit='MiB'>4000</memory>
I1202 11:30:38.417424 14046 main.go:141] libmachine: (addons-093588) <vcpu>2</vcpu>
I1202 11:30:38.417430 14046 main.go:141] libmachine: (addons-093588) <features>
I1202 11:30:38.417442 14046 main.go:141] libmachine: (addons-093588) <acpi/>
I1202 11:30:38.417452 14046 main.go:141] libmachine: (addons-093588) <apic/>
I1202 11:30:38.417460 14046 main.go:141] libmachine: (addons-093588) <pae/>
I1202 11:30:38.417469 14046 main.go:141] libmachine: (addons-093588)
I1202 11:30:38.417482 14046 main.go:141] libmachine: (addons-093588) </features>
I1202 11:30:38.417492 14046 main.go:141] libmachine: (addons-093588) <cpu mode='host-passthrough'>
I1202 11:30:38.417497 14046 main.go:141] libmachine: (addons-093588)
I1202 11:30:38.417520 14046 main.go:141] libmachine: (addons-093588) </cpu>
I1202 11:30:38.417539 14046 main.go:141] libmachine: (addons-093588) <os>
I1202 11:30:38.417549 14046 main.go:141] libmachine: (addons-093588) <type>hvm</type>
I1202 11:30:38.417564 14046 main.go:141] libmachine: (addons-093588) <boot dev='cdrom'/>
I1202 11:30:38.417575 14046 main.go:141] libmachine: (addons-093588) <boot dev='hd'/>
I1202 11:30:38.417584 14046 main.go:141] libmachine: (addons-093588) <bootmenu enable='no'/>
I1202 11:30:38.417595 14046 main.go:141] libmachine: (addons-093588) </os>
I1202 11:30:38.417604 14046 main.go:141] libmachine: (addons-093588) <devices>
I1202 11:30:38.417614 14046 main.go:141] libmachine: (addons-093588) <disk type='file' device='cdrom'>
I1202 11:30:38.417628 14046 main.go:141] libmachine: (addons-093588) <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/boot2docker.iso'/>
I1202 11:30:38.417650 14046 main.go:141] libmachine: (addons-093588) <target dev='hdc' bus='scsi'/>
I1202 11:30:38.417668 14046 main.go:141] libmachine: (addons-093588) <readonly/>
I1202 11:30:38.417681 14046 main.go:141] libmachine: (addons-093588) </disk>
I1202 11:30:38.417693 14046 main.go:141] libmachine: (addons-093588) <disk type='file' device='disk'>
I1202 11:30:38.417706 14046 main.go:141] libmachine: (addons-093588) <driver name='qemu' type='raw' cache='default' io='threads' />
I1202 11:30:38.417720 14046 main.go:141] libmachine: (addons-093588) <source file='/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/addons-093588.rawdisk'/>
I1202 11:30:38.417732 14046 main.go:141] libmachine: (addons-093588) <target dev='hda' bus='virtio'/>
I1202 11:30:38.417743 14046 main.go:141] libmachine: (addons-093588) </disk>
I1202 11:30:38.417756 14046 main.go:141] libmachine: (addons-093588) <interface type='network'>
I1202 11:30:38.417768 14046 main.go:141] libmachine: (addons-093588) <source network='mk-addons-093588'/>
I1202 11:30:38.417780 14046 main.go:141] libmachine: (addons-093588) <model type='virtio'/>
I1202 11:30:38.417789 14046 main.go:141] libmachine: (addons-093588) </interface>
I1202 11:30:38.417800 14046 main.go:141] libmachine: (addons-093588) <interface type='network'>
I1202 11:30:38.417815 14046 main.go:141] libmachine: (addons-093588) <source network='default'/>
I1202 11:30:38.417824 14046 main.go:141] libmachine: (addons-093588) <model type='virtio'/>
I1202 11:30:38.417832 14046 main.go:141] libmachine: (addons-093588) </interface>
I1202 11:30:38.417847 14046 main.go:141] libmachine: (addons-093588) <serial type='pty'>
I1202 11:30:38.417858 14046 main.go:141] libmachine: (addons-093588) <target port='0'/>
I1202 11:30:38.417868 14046 main.go:141] libmachine: (addons-093588) </serial>
I1202 11:30:38.417882 14046 main.go:141] libmachine: (addons-093588) <console type='pty'>
I1202 11:30:38.417900 14046 main.go:141] libmachine: (addons-093588) <target type='serial' port='0'/>
I1202 11:30:38.417909 14046 main.go:141] libmachine: (addons-093588) </console>
I1202 11:30:38.417919 14046 main.go:141] libmachine: (addons-093588) <rng model='virtio'>
I1202 11:30:38.417930 14046 main.go:141] libmachine: (addons-093588) <backend model='random'>/dev/random</backend>
I1202 11:30:38.417942 14046 main.go:141] libmachine: (addons-093588) </rng>
I1202 11:30:38.417955 14046 main.go:141] libmachine: (addons-093588)
I1202 11:30:38.417965 14046 main.go:141] libmachine: (addons-093588)
I1202 11:30:38.417974 14046 main.go:141] libmachine: (addons-093588) </devices>
I1202 11:30:38.417982 14046 main.go:141] libmachine: (addons-093588) </domain>
I1202 11:30:38.417991 14046 main.go:141] libmachine: (addons-093588)
I1202 11:30:38.423153 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:41:86:b0 in network default
I1202 11:30:38.423632 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:38.423650 14046 main.go:141] libmachine: (addons-093588) Ensuring networks are active...
I1202 11:30:38.424163 14046 main.go:141] libmachine: (addons-093588) Ensuring network default is active
I1202 11:30:38.424413 14046 main.go:141] libmachine: (addons-093588) Ensuring network mk-addons-093588 is active
I1202 11:30:38.424831 14046 main.go:141] libmachine: (addons-093588) Getting domain xml...
I1202 11:30:38.425386 14046 main.go:141] libmachine: (addons-093588) Creating domain...
I1202 11:30:39.768153 14046 main.go:141] libmachine: (addons-093588) Waiting to get IP...
I1202 11:30:39.769048 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:39.769406 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:39.769434 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:39.769391 14068 retry.go:31] will retry after 262.465444ms: waiting for machine to come up
I1202 11:30:40.033678 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:40.034019 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:40.034047 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:40.033987 14068 retry.go:31] will retry after 268.465291ms: waiting for machine to come up
I1202 11:30:40.304474 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:40.304856 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:40.304886 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:40.304845 14068 retry.go:31] will retry after 459.329717ms: waiting for machine to come up
I1202 11:30:40.765148 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:40.765539 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:40.765576 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:40.765500 14068 retry.go:31] will retry after 473.589572ms: waiting for machine to come up
I1202 11:30:41.241029 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:41.241356 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:41.241402 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:41.241309 14068 retry.go:31] will retry after 489.24768ms: waiting for machine to come up
I1202 11:30:41.732001 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:41.732402 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:41.732428 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:41.732337 14068 retry.go:31] will retry after 764.713135ms: waiting for machine to come up
I1202 11:30:42.498043 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:42.498440 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:42.498462 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:42.498418 14068 retry.go:31] will retry after 1.105216684s: waiting for machine to come up
I1202 11:30:43.605335 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:43.605759 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:43.605784 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:43.605714 14068 retry.go:31] will retry after 1.334125941s: waiting for machine to come up
I1202 11:30:44.942153 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:44.942579 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:44.942604 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:44.942535 14068 retry.go:31] will retry after 1.384283544s: waiting for machine to come up
I1202 11:30:46.329052 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:46.329455 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:46.329485 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:46.329405 14068 retry.go:31] will retry after 1.997806074s: waiting for machine to come up
I1202 11:30:48.328389 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:48.328833 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:48.328861 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:48.328789 14068 retry.go:31] will retry after 2.344508632s: waiting for machine to come up
I1202 11:30:50.676551 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:50.676981 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:50.677010 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:50.676934 14068 retry.go:31] will retry after 3.069367748s: waiting for machine to come up
I1202 11:30:53.748570 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:53.748926 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:53.748950 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:53.748888 14068 retry.go:31] will retry after 2.996899134s: waiting for machine to come up
I1202 11:30:56.749121 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:30:56.749572 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find current IP address of domain addons-093588 in network mk-addons-093588
I1202 11:30:56.749597 14046 main.go:141] libmachine: (addons-093588) DBG | I1202 11:30:56.749520 14068 retry.go:31] will retry after 4.228069851s: waiting for machine to come up
I1202 11:31:00.981506 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:00.981936 14046 main.go:141] libmachine: (addons-093588) Found IP for machine: 192.168.39.203
I1202 11:31:00.981958 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has current primary IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:00.981964 14046 main.go:141] libmachine: (addons-093588) Reserving static IP address...
I1202 11:31:00.982295 14046 main.go:141] libmachine: (addons-093588) DBG | unable to find host DHCP lease matching {name: "addons-093588", mac: "52:54:00:8a:ff:d0", ip: "192.168.39.203"} in network mk-addons-093588
I1202 11:31:01.048415 14046 main.go:141] libmachine: (addons-093588) DBG | Getting to WaitForSSH function...
I1202 11:31:01.048442 14046 main.go:141] libmachine: (addons-093588) Reserved static IP address: 192.168.39.203
I1202 11:31:01.048454 14046 main.go:141] libmachine: (addons-093588) Waiting for SSH to be available...
I1202 11:31:01.051059 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.051438 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:01.051472 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.051619 14046 main.go:141] libmachine: (addons-093588) DBG | Using SSH client type: external
I1202 11:31:01.051638 14046 main.go:141] libmachine: (addons-093588) DBG | Using SSH private key: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa (-rw-------)
I1202 11:31:01.051663 14046 main.go:141] libmachine: (addons-093588) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.203 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa -p 22] /usr/bin/ssh <nil>}
I1202 11:31:01.051673 14046 main.go:141] libmachine: (addons-093588) DBG | About to run SSH command:
I1202 11:31:01.051681 14046 main.go:141] libmachine: (addons-093588) DBG | exit 0
I1202 11:31:01.179840 14046 main.go:141] libmachine: (addons-093588) DBG | SSH cmd err, output: <nil>:
I1202 11:31:01.180069 14046 main.go:141] libmachine: (addons-093588) KVM machine creation complete!
I1202 11:31:01.180372 14046 main.go:141] libmachine: (addons-093588) Calling .GetConfigRaw
I1202 11:31:01.181030 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:01.181223 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:01.181368 14046 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
I1202 11:31:01.181383 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:01.182471 14046 main.go:141] libmachine: Detecting operating system of created instance...
I1202 11:31:01.182489 14046 main.go:141] libmachine: Waiting for SSH to be available...
I1202 11:31:01.182497 14046 main.go:141] libmachine: Getting to WaitForSSH function...
I1202 11:31:01.182504 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:01.184526 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.184815 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:01.184837 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.184948 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:01.185116 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:01.185254 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:01.185411 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:01.185565 14046 main.go:141] libmachine: Using SSH client type: native
I1202 11:31:01.185779 14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil> [] 0s} 192.168.39.203 22 <nil> <nil>}
I1202 11:31:01.185793 14046 main.go:141] libmachine: About to run SSH command:
exit 0
I1202 11:31:01.282941 14046 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1202 11:31:01.282962 14046 main.go:141] libmachine: Detecting the provisioner...
I1202 11:31:01.282973 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:01.285619 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.285952 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:01.285985 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.286145 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:01.286305 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:01.286461 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:01.286576 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:01.286731 14046 main.go:141] libmachine: Using SSH client type: native
I1202 11:31:01.286920 14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil> [] 0s} 192.168.39.203 22 <nil> <nil>}
I1202 11:31:01.286931 14046 main.go:141] libmachine: About to run SSH command:
cat /etc/os-release
I1202 11:31:01.388462 14046 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
I1202 11:31:01.388501 14046 main.go:141] libmachine: found compatible host: buildroot
I1202 11:31:01.388507 14046 main.go:141] libmachine: Provisioning with buildroot...
I1202 11:31:01.388512 14046 main.go:141] libmachine: (addons-093588) Calling .GetMachineName
I1202 11:31:01.388677 14046 buildroot.go:166] provisioning hostname "addons-093588"
I1202 11:31:01.388696 14046 main.go:141] libmachine: (addons-093588) Calling .GetMachineName
I1202 11:31:01.388841 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:01.391137 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.391506 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:01.391534 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.391652 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:01.391816 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:01.391965 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:01.392102 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:01.392246 14046 main.go:141] libmachine: Using SSH client type: native
I1202 11:31:01.392391 14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil> [] 0s} 192.168.39.203 22 <nil> <nil>}
I1202 11:31:01.392402 14046 main.go:141] libmachine: About to run SSH command:
sudo hostname addons-093588 && echo "addons-093588" | sudo tee /etc/hostname
I1202 11:31:01.506202 14046 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-093588
I1202 11:31:01.506240 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:01.509060 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.509411 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:01.509432 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.509608 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:01.509804 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:01.509958 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:01.510079 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:01.510222 14046 main.go:141] libmachine: Using SSH client type: native
I1202 11:31:01.510393 14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil> [] 0s} 192.168.39.203 22 <nil> <nil>}
I1202 11:31:01.510415 14046 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-093588' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-093588/g' /etc/hosts;
else
echo '127.0.1.1 addons-093588' | sudo tee -a /etc/hosts;
fi
fi
I1202 11:31:01.616311 14046 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1202 11:31:01.616347 14046 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20033-6257/.minikube CaCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20033-6257/.minikube}
I1202 11:31:01.616393 14046 buildroot.go:174] setting up certificates
I1202 11:31:01.616410 14046 provision.go:84] configureAuth start
I1202 11:31:01.616430 14046 main.go:141] libmachine: (addons-093588) Calling .GetMachineName
I1202 11:31:01.616682 14046 main.go:141] libmachine: (addons-093588) Calling .GetIP
I1202 11:31:01.619505 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.620156 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:01.620182 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.620327 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:01.622275 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.622543 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:01.622570 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.622692 14046 provision.go:143] copyHostCerts
I1202 11:31:01.622767 14046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/ca.pem (1082 bytes)
I1202 11:31:01.622899 14046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/cert.pem (1123 bytes)
I1202 11:31:01.622955 14046 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20033-6257/.minikube/key.pem (1679 bytes)
I1202 11:31:01.623001 14046 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem org=jenkins.addons-093588 san=[127.0.0.1 192.168.39.203 addons-093588 localhost minikube]
I1202 11:31:01.923775 14046 provision.go:177] copyRemoteCerts
I1202 11:31:01.923832 14046 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1202 11:31:01.923854 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:01.926193 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.926521 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:01.926551 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:01.926687 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:01.926841 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:01.926972 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:01.927075 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:02.005579 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1202 11:31:02.029137 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1202 11:31:02.051665 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1202 11:31:02.074035 14046 provision.go:87] duration metric: took 457.609565ms to configureAuth
I1202 11:31:02.074059 14046 buildroot.go:189] setting minikube options for container-runtime
I1202 11:31:02.074217 14046 config.go:182] Loaded profile config "addons-093588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:31:02.074283 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:02.076631 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.076987 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:02.077013 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.077164 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:02.077336 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:02.077492 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:02.077615 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:02.077760 14046 main.go:141] libmachine: Using SSH client type: native
I1202 11:31:02.077906 14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil> [] 0s} 192.168.39.203 22 <nil> <nil>}
I1202 11:31:02.077920 14046 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1202 11:31:02.287644 14046 main.go:141] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
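For reference, the CRIO_MINIKUBE_OPTIONS value written above only takes effect if crio.service actually sources /etc/sysconfig/crio.minikube; a quick manual check on the guest (illustrative commands only, assuming the Buildroot image wires the file into crio.service via an EnvironmentFile= line) would be:
    sudo cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i environmentfile
    ps -o args= -C crio | grep -o -- '--insecure-registry [^ ]*'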
I1202 11:31:02.287666 14046 main.go:141] libmachine: Checking connection to Docker...
I1202 11:31:02.287672 14046 main.go:141] libmachine: (addons-093588) Calling .GetURL
I1202 11:31:02.288858 14046 main.go:141] libmachine: (addons-093588) DBG | Using libvirt version 6000000
I1202 11:31:02.290750 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.291050 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:02.291080 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.291195 14046 main.go:141] libmachine: Docker is up and running!
I1202 11:31:02.291216 14046 main.go:141] libmachine: Reticulating splines...
I1202 11:31:02.291222 14046 client.go:171] duration metric: took 24.757312526s to LocalClient.Create
I1202 11:31:02.291244 14046 start.go:167] duration metric: took 24.757374154s to libmachine.API.Create "addons-093588"
I1202 11:31:02.291261 14046 start.go:293] postStartSetup for "addons-093588" (driver="kvm2")
I1202 11:31:02.291272 14046 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1202 11:31:02.291288 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:02.291502 14046 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1202 11:31:02.291522 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:02.293349 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.293594 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:02.293619 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.293743 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:02.293886 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:02.294032 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:02.294145 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:02.373911 14046 ssh_runner.go:195] Run: cat /etc/os-release
I1202 11:31:02.378111 14046 info.go:137] Remote host: Buildroot 2023.02.9
I1202 11:31:02.378132 14046 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/addons for local assets ...
I1202 11:31:02.378192 14046 filesync.go:126] Scanning /home/jenkins/minikube-integration/20033-6257/.minikube/files for local assets ...
I1202 11:31:02.378214 14046 start.go:296] duration metric: took 86.945972ms for postStartSetup
I1202 11:31:02.378245 14046 main.go:141] libmachine: (addons-093588) Calling .GetConfigRaw
I1202 11:31:02.378753 14046 main.go:141] libmachine: (addons-093588) Calling .GetIP
I1202 11:31:02.380981 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.381316 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:02.381361 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.381564 14046 profile.go:143] Saving config to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/config.json ...
I1202 11:31:02.381722 14046 start.go:128] duration metric: took 24.864904519s to createHost
I1202 11:31:02.381743 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:02.383934 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.384272 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:02.384314 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.384473 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:02.384686 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:02.384826 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:02.384934 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:02.385083 14046 main.go:141] libmachine: Using SSH client type: native
I1202 11:31:02.385236 14046 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil> [] 0s} 192.168.39.203 22 <nil> <nil>}
I1202 11:31:02.385245 14046 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I1202 11:31:02.488569 14046 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733139062.461766092
I1202 11:31:02.488586 14046 fix.go:216] guest clock: 1733139062.461766092
I1202 11:31:02.488594 14046 fix.go:229] Guest: 2024-12-02 11:31:02.461766092 +0000 UTC Remote: 2024-12-02 11:31:02.381733026 +0000 UTC m=+24.960080527 (delta=80.033066ms)
I1202 11:31:02.488611 14046 fix.go:200] guest clock delta is within tolerance: 80.033066ms
I1202 11:31:02.488616 14046 start.go:83] releasing machines lock for "addons-093588", held for 24.971869861s
I1202 11:31:02.488633 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:02.488804 14046 main.go:141] libmachine: (addons-093588) Calling .GetIP
I1202 11:31:02.491410 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.491718 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:02.491740 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.491912 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:02.492303 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:02.492498 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:02.492599 14046 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1202 11:31:02.492640 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:02.492659 14046 ssh_runner.go:195] Run: cat /version.json
I1202 11:31:02.492682 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:02.495098 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.495504 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:02.495523 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.495555 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.495683 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:02.495836 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:02.495981 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:02.496036 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:02.496054 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:02.496127 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:02.496309 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:02.496461 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:02.496596 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:02.496733 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:02.593747 14046 ssh_runner.go:195] Run: systemctl --version
I1202 11:31:02.599449 14046 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1202 11:31:02.754591 14046 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1202 11:31:02.760318 14046 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1202 11:31:02.760381 14046 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1202 11:31:02.775654 14046 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1202 11:31:02.775672 14046 start.go:495] detecting cgroup driver to use...
I1202 11:31:02.775730 14046 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1202 11:31:02.790974 14046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1202 11:31:02.803600 14046 docker.go:217] disabling cri-docker service (if available) ...
I1202 11:31:02.803656 14046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1202 11:31:02.816048 14046 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1202 11:31:02.828952 14046 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1202 11:31:02.939245 14046 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1202 11:31:03.103186 14046 docker.go:233] disabling docker service ...
I1202 11:31:03.103247 14046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1202 11:31:03.117174 14046 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1202 11:31:03.129365 14046 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1202 11:31:03.241601 14046 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1202 11:31:03.354550 14046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1202 11:31:03.368814 14046 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1202 11:31:03.387288 14046 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
I1202 11:31:03.387336 14046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
I1202 11:31:03.397743 14046 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1202 11:31:03.397802 14046 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1202 11:31:03.408206 14046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1202 11:31:03.418070 14046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1202 11:31:03.428088 14046 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1202 11:31:03.438226 14046 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1202 11:31:03.448028 14046 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1202 11:31:03.464548 14046 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
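Taken together, the sed edits above are meant to leave /etc/crio/crio.conf.d/02-crio.conf with values along these lines (a sketch reconstructed from the commands, not a dump of the actual file):
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
        "net.ipv4.ip_unprivileged_port_start=0",
    ]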
I1202 11:31:03.474482 14046 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1202 11:31:03.483342 14046 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1202 11:31:03.483384 14046 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1202 11:31:03.495424 14046 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
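The sysctl failure above is expected while br_netfilter is not yet loaded; once the modprobe succeeds the key exists and can be rechecked by hand (illustrative commands, not part of the log):
    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables    # should now print a value instead of "No such file or directory"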
I1202 11:31:03.504365 14046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1202 11:31:03.616131 14046 ssh_runner.go:195] Run: sudo systemctl restart crio
I1202 11:31:03.806820 14046 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
I1202 11:31:03.806906 14046 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1202 11:31:03.811970 14046 start.go:563] Will wait 60s for crictl version
I1202 11:31:03.812015 14046 ssh_runner.go:195] Run: which crictl
I1202 11:31:03.815656 14046 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1202 11:31:03.854668 14046 start.go:579] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1202 11:31:03.854771 14046 ssh_runner.go:195] Run: crio --version
I1202 11:31:03.883503 14046 ssh_runner.go:195] Run: crio --version
I1202 11:31:03.943735 14046 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
I1202 11:31:03.978507 14046 main.go:141] libmachine: (addons-093588) Calling .GetIP
I1202 11:31:03.981079 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:03.981440 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:03.981469 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:03.981694 14046 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1202 11:31:03.986029 14046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1202 11:31:03.999160 14046 kubeadm.go:883] updating cluster {Name:addons-093588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1202 11:31:03.999273 14046 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1202 11:31:03.999318 14046 ssh_runner.go:195] Run: sudo crictl images --output json
I1202 11:31:04.032753 14046 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
I1202 11:31:04.032848 14046 ssh_runner.go:195] Run: which lz4
I1202 11:31:04.036732 14046 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1202 11:31:04.040941 14046 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1202 11:31:04.040969 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
I1202 11:31:05.312874 14046 crio.go:462] duration metric: took 1.276172912s to copy over tarball
I1202 11:31:05.312957 14046 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1202 11:31:07.438469 14046 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.125483138s)
I1202 11:31:07.438502 14046 crio.go:469] duration metric: took 2.125592032s to extract the tarball
I1202 11:31:07.438513 14046 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1202 11:31:07.475913 14046 ssh_runner.go:195] Run: sudo crictl images --output json
I1202 11:31:07.526664 14046 crio.go:514] all images are preloaded for cri-o runtime.
I1202 11:31:07.526685 14046 cache_images.go:84] Images are preloaded, skipping loading
I1202 11:31:07.526695 14046 kubeadm.go:934] updating node { 192.168.39.203 8443 v1.31.2 crio true true} ...
I1202 11:31:07.526796 14046 kubeadm.go:946] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-093588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.203
[Install]
config:
{KubernetesVersion:v1.31.2 ClusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1202 11:31:07.526870 14046 ssh_runner.go:195] Run: crio config
I1202 11:31:07.582564 14046 cni.go:84] Creating CNI manager for ""
I1202 11:31:07.582584 14046 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1202 11:31:07.582593 14046 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1202 11:31:07.582614 14046 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.203 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-093588 NodeName:addons-093588 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1202 11:31:07.582727 14046 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.203
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "addons-093588"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.203"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.203"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.31.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
  # Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1202 11:31:07.582780 14046 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
I1202 11:31:07.592378 14046 binaries.go:44] Found k8s binaries, skipping transfer
I1202 11:31:07.592421 14046 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1202 11:31:07.601397 14046 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1202 11:31:07.617029 14046 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1202 11:31:07.632123 14046 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
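A rendered config like the kubeadm.yaml.new above can be sanity-checked before init with kubeadm's own validator (a hypothetical extra step, not something minikube runs here, assuming the bundled kubeadm supports the config validate subcommand):
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new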
I1202 11:31:07.647544 14046 ssh_runner.go:195] Run: grep 192.168.39.203 control-plane.minikube.internal$ /etc/hosts
I1202 11:31:07.651140 14046 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.203 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
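Between this rewrite and the earlier host.minikube.internal one, the guest's /etc/hosts ends up with two extra entries, roughly:
    192.168.39.1 host.minikube.internal
    192.168.39.203 control-plane.minikube.internal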
I1202 11:31:07.662518 14046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1202 11:31:07.774786 14046 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1202 11:31:07.795670 14046 certs.go:68] Setting up /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588 for IP: 192.168.39.203
I1202 11:31:07.795689 14046 certs.go:194] generating shared ca certs ...
I1202 11:31:07.795704 14046 certs.go:226] acquiring lock for ca certs: {Name:mkd90d864427c88c2207fea7caea2d2f5fdfaac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:07.795860 14046 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key
I1202 11:31:07.881230 14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt ...
I1202 11:31:07.881255 14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt: {Name:mkb25dcf874cc76262dd87f7954dc5def047ba80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:07.881433 14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key ...
I1202 11:31:07.881447 14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key: {Name:mk24aaecfce06715328a2e1bdf78912e66e577e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:07.881546 14046 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key
I1202 11:31:08.066592 14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt ...
I1202 11:31:08.066617 14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt: {Name:mk353521566f5b511b2c49b5facbb9d7e8a55579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:08.066785 14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key ...
I1202 11:31:08.066799 14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key: {Name:mk0fae51faecacd368a9e9845e8ec1cc10ac1c05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:08.066891 14046 certs.go:256] generating profile certs ...
I1202 11:31:08.066943 14046 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.key
I1202 11:31:08.066963 14046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt with IP's: []
I1202 11:31:08.199504 14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt ...
I1202 11:31:08.199534 14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.crt: {Name:mke09ad3d888dc6da1ff7604f62658a689c18924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:08.199693 14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.key ...
I1202 11:31:08.199704 14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/client.key: {Name:mk29de8a87eafaedfa0731583b4b03810c89d586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:08.199771 14046 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key.522ccd78
I1202 11:31:08.199789 14046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt.522ccd78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.203]
I1202 11:31:08.366826 14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt.522ccd78 ...
I1202 11:31:08.366857 14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt.522ccd78: {Name:mkccb5564ec2f6a186fbab8f5cb67d658caada7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:08.367032 14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key.522ccd78 ...
I1202 11:31:08.367046 14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key.522ccd78: {Name:mk589a09a46c7953a1cc24cad0c706bf9dfb6e43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:08.367125 14046 certs.go:381] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt.522ccd78 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt
I1202 11:31:08.367205 14046 certs.go:385] copying /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key.522ccd78 -> /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key
I1202 11:31:08.367257 14046 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.key
I1202 11:31:08.367277 14046 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.crt with IP's: []
I1202 11:31:08.450648 14046 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.crt ...
I1202 11:31:08.450679 14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.crt: {Name:mke55f3a980df3599f606cdcab7f35740d5da41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:08.450843 14046 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.key ...
I1202 11:31:08.450854 14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.key: {Name:mk944302d6559b5e702f266fc95edf52b4fa7b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:08.451514 14046 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca-key.pem (1675 bytes)
I1202 11:31:08.451556 14046 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/ca.pem (1082 bytes)
I1202 11:31:08.451584 14046 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/cert.pem (1123 bytes)
I1202 11:31:08.451613 14046 certs.go:484] found cert: /home/jenkins/minikube-integration/20033-6257/.minikube/certs/key.pem (1679 bytes)
I1202 11:31:08.452184 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1202 11:31:08.489217 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1202 11:31:08.517562 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1202 11:31:08.543285 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1202 11:31:08.569173 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1202 11:31:08.594856 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1202 11:31:08.620538 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1202 11:31:08.645986 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/profiles/addons-093588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1202 11:31:08.668456 14046 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20033-6257/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1202 11:31:08.690521 14046 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1202 11:31:08.706980 14046 ssh_runner.go:195] Run: openssl version
I1202 11:31:08.712732 14046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1202 11:31:08.723032 14046 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1202 11:31:08.727495 14046 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 2 11:31 /usr/share/ca-certificates/minikubeCA.pem
I1202 11:31:08.727526 14046 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1202 11:31:08.733224 14046 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
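The b5213941.0 name used for the symlink above is the OpenSSL subject hash of minikubeCA, i.e. the value printed by the openssl x509 -hash command two steps earlier; it can be reproduced by hand with:
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # expected to print b5213941
    ls -l /etc/ssl/certs/b5213941.0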
I1202 11:31:08.743680 14046 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1202 11:31:08.747664 14046 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1202 11:31:08.747708 14046 kubeadm.go:392] StartCluster: {Name:addons-093588 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 C
lusterName:addons-093588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1202 11:31:08.747775 14046 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1202 11:31:08.747814 14046 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1202 11:31:08.787888 14046 cri.go:89] found id: ""
I1202 11:31:08.787950 14046 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1202 11:31:08.797362 14046 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1202 11:31:08.809451 14046 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1202 11:31:08.820282 14046 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1202 11:31:08.820297 14046 kubeadm.go:157] found existing configuration files:
I1202 11:31:08.820333 14046 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1202 11:31:08.828677 14046 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1202 11:31:08.828711 14046 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1202 11:31:08.837308 14046 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1202 11:31:08.845525 14046 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1202 11:31:08.845558 14046 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1202 11:31:08.854180 14046 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1202 11:31:08.862334 14046 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1202 11:31:08.862373 14046 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1202 11:31:08.871107 14046 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1202 11:31:08.879485 14046 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1202 11:31:08.879516 14046 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1202 11:31:08.888193 14046 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1202 11:31:09.042807 14046 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1202 11:31:19.640576 14046 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
I1202 11:31:19.640684 14046 kubeadm.go:310] [preflight] Running pre-flight checks
I1202 11:31:19.640804 14046 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I1202 11:31:19.640929 14046 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1202 11:31:19.641054 14046 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1202 11:31:19.641154 14046 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1202 11:31:19.642488 14046 out.go:235] - Generating certificates and keys ...
I1202 11:31:19.642574 14046 kubeadm.go:310] [certs] Using existing ca certificate authority
I1202 11:31:19.642657 14046 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I1202 11:31:19.642746 14046 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I1202 11:31:19.642837 14046 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I1202 11:31:19.642899 14046 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I1202 11:31:19.642942 14046 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I1202 11:31:19.642987 14046 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I1202 11:31:19.643101 14046 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-093588 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
I1202 11:31:19.643167 14046 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I1202 11:31:19.643335 14046 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-093588 localhost] and IPs [192.168.39.203 127.0.0.1 ::1]
I1202 11:31:19.643411 14046 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I1202 11:31:19.643467 14046 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I1202 11:31:19.643523 14046 kubeadm.go:310] [certs] Generating "sa" key and public key
I1202 11:31:19.643615 14046 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1202 11:31:19.643692 14046 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I1202 11:31:19.643782 14046 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1202 11:31:19.643865 14046 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1202 11:31:19.643943 14046 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1202 11:31:19.643993 14046 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1202 11:31:19.644064 14046 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1202 11:31:19.644161 14046 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1202 11:31:19.645478 14046 out.go:235] - Booting up control plane ...
I1202 11:31:19.645576 14046 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1202 11:31:19.645645 14046 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1202 11:31:19.645701 14046 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1202 11:31:19.645795 14046 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1202 11:31:19.645918 14046 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1202 11:31:19.645986 14046 kubeadm.go:310] [kubelet-start] Starting the kubelet
I1202 11:31:19.646132 14046 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1202 11:31:19.646250 14046 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1202 11:31:19.646306 14046 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.795667ms
I1202 11:31:19.646374 14046 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I1202 11:31:19.646429 14046 kubeadm.go:310] [api-check] The API server is healthy after 5.502118168s
I1202 11:31:19.646515 14046 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1202 11:31:19.646628 14046 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1202 11:31:19.646678 14046 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I1202 11:31:19.646851 14046 kubeadm.go:310] [mark-control-plane] Marking the node addons-093588 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1202 11:31:19.646906 14046 kubeadm.go:310] [bootstrap-token] Using token: 1k1sz6.8l7j2y5vp52tcjwr
I1202 11:31:19.648784 14046 out.go:235] - Configuring RBAC rules ...
I1202 11:31:19.648889 14046 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1202 11:31:19.648963 14046 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1202 11:31:19.649092 14046 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1202 11:31:19.649233 14046 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1202 11:31:19.649389 14046 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1202 11:31:19.649475 14046 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1202 11:31:19.649614 14046 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1202 11:31:19.649654 14046 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I1202 11:31:19.649717 14046 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I1202 11:31:19.649728 14046 kubeadm.go:310]
I1202 11:31:19.649818 14046 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I1202 11:31:19.649829 14046 kubeadm.go:310]
I1202 11:31:19.649939 14046 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I1202 11:31:19.649949 14046 kubeadm.go:310]
I1202 11:31:19.649985 14046 kubeadm.go:310] mkdir -p $HOME/.kube
I1202 11:31:19.650037 14046 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1202 11:31:19.650080 14046 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1202 11:31:19.650086 14046 kubeadm.go:310]
I1202 11:31:19.650138 14046 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I1202 11:31:19.650144 14046 kubeadm.go:310]
I1202 11:31:19.650183 14046 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I1202 11:31:19.650189 14046 kubeadm.go:310]
I1202 11:31:19.650234 14046 kubeadm.go:310] You should now deploy a pod network to the cluster.
I1202 11:31:19.650304 14046 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1202 11:31:19.650365 14046 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1202 11:31:19.650373 14046 kubeadm.go:310]
I1202 11:31:19.650440 14046 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I1202 11:31:19.650514 14046 kubeadm.go:310] and service account keys on each node and then running the following as root:
I1202 11:31:19.650522 14046 kubeadm.go:310]
I1202 11:31:19.650595 14046 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1k1sz6.8l7j2y5vp52tcjwr \
I1202 11:31:19.650697 14046 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb \
I1202 11:31:19.650725 14046 kubeadm.go:310] --control-plane
I1202 11:31:19.650735 14046 kubeadm.go:310]
I1202 11:31:19.650849 14046 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I1202 11:31:19.650859 14046 kubeadm.go:310]
I1202 11:31:19.650970 14046 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1k1sz6.8l7j2y5vp52tcjwr \
I1202 11:31:19.651080 14046 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:9ebc90a8ea81482c2fee6e485f0934fa91cca27bad5f3e63f2929fbd27f9dacb
I1202 11:31:19.651101 14046 cni.go:84] Creating CNI manager for ""
I1202 11:31:19.651113 14046 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1202 11:31:19.652324 14046 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
I1202 11:31:19.653350 14046 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1202 11:31:19.663856 14046 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
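The 496-byte 1-k8s.conflist written above is not echoed in the log; a representative bridge conflist of the kind minikube generates for this setup (illustrative sketch only, with the pod CIDR taken from the 10.244.0.0/16 choice earlier) would look roughly like:
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }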
I1202 11:31:19.683635 14046 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1202 11:31:19.683704 14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1202 11:31:19.683726 14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-093588 minikube.k8s.io/updated_at=2024_12_02T11_31_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8 minikube.k8s.io/name=addons-093588 minikube.k8s.io/primary=true
I1202 11:31:19.809186 14046 ops.go:34] apiserver oom_adj: -16
I1202 11:31:19.809308 14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1202 11:31:20.309679 14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1202 11:31:20.809603 14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1202 11:31:21.310155 14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1202 11:31:21.809913 14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1202 11:31:22.310060 14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1202 11:31:22.809479 14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1202 11:31:23.309701 14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1202 11:31:23.809394 14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1202 11:31:24.310391 14046 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1202 11:31:24.414631 14046 kubeadm.go:1113] duration metric: took 4.730985398s to wait for elevateKubeSystemPrivileges
I1202 11:31:24.414668 14046 kubeadm.go:394] duration metric: took 15.666963518s to StartCluster
I1202 11:31:24.414689 14046 settings.go:142] acquiring lock: {Name:mk6e54e0b760aaf8e7bc9405f73b452b566ab7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:24.414816 14046 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20033-6257/kubeconfig
I1202 11:31:24.415263 14046 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20033-6257/kubeconfig: {Name:mkbf386db1f0c2ae2c08c1106fe6101226787e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1202 11:31:24.415607 14046 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.203 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
I1202 11:31:24.415637 14046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1202 11:31:24.415683 14046 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1202 11:31:24.415803 14046 addons.go:69] Setting inspektor-gadget=true in profile "addons-093588"
I1202 11:31:24.415812 14046 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-093588"
I1202 11:31:24.415821 14046 addons.go:234] Setting addon inspektor-gadget=true in "addons-093588"
I1202 11:31:24.415825 14046 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-093588"
I1202 11:31:24.415833 14046 addons.go:69] Setting storage-provisioner=true in profile "addons-093588"
I1202 11:31:24.415851 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.415862 14046 addons.go:234] Setting addon storage-provisioner=true in "addons-093588"
I1202 11:31:24.415871 14046 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-093588"
I1202 11:31:24.415888 14046 config.go:182] Loaded profile config "addons-093588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:31:24.415899 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.415899 14046 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-093588"
I1202 11:31:24.415926 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.415938 14046 addons.go:69] Setting metrics-server=true in profile "addons-093588"
I1202 11:31:24.415951 14046 addons.go:234] Setting addon metrics-server=true in "addons-093588"
I1202 11:31:24.415975 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.415801 14046 addons.go:69] Setting yakd=true in profile "addons-093588"
I1202 11:31:24.416328 14046 addons.go:69] Setting volcano=true in profile "addons-093588"
I1202 11:31:24.416332 14046 addons.go:234] Setting addon yakd=true in "addons-093588"
I1202 11:31:24.416340 14046 addons.go:234] Setting addon volcano=true in "addons-093588"
I1202 11:31:24.416348 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.416362 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.416364 14046 addons.go:69] Setting volumesnapshots=true in profile "addons-093588"
I1202 11:31:24.416354 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.416376 14046 addons.go:234] Setting addon volumesnapshots=true in "addons-093588"
I1202 11:31:24.416348 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.416391 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.416392 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.416393 14046 addons.go:69] Setting registry=true in profile "addons-093588"
I1202 11:31:24.416405 14046 addons.go:234] Setting addon registry=true in "addons-093588"
I1202 11:31:24.416412 14046 addons.go:69] Setting cloud-spanner=true in profile "addons-093588"
I1202 11:31:24.416416 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.416426 14046 addons.go:234] Setting addon cloud-spanner=true in "addons-093588"
I1202 11:31:24.416405 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.416426 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.416450 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.416483 14046 addons.go:69] Setting gcp-auth=true in profile "addons-093588"
I1202 11:31:24.416504 14046 addons.go:69] Setting ingress=true in profile "addons-093588"
I1202 11:31:24.416357 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.416517 14046 addons.go:234] Setting addon ingress=true in "addons-093588"
I1202 11:31:24.416519 14046 mustload.go:65] Loading cluster: addons-093588
I1202 11:31:24.416532 14046 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-093588"
I1202 11:31:24.416560 14046 addons.go:69] Setting default-storageclass=true in profile "addons-093588"
I1202 11:31:24.416586 14046 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-093588"
I1202 11:31:24.416590 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.416632 14046 addons.go:69] Setting ingress-dns=true in profile "addons-093588"
I1202 11:31:24.416650 14046 addons.go:234] Setting addon ingress-dns=true in "addons-093588"
I1202 11:31:24.416682 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.416780 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.416807 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.416363 14046 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-093588"
I1202 11:31:24.416859 14046 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-093588"
I1202 11:31:24.416872 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.416890 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.416900 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.416439 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.416920 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.416352 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.416565 14046 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-093588"
I1202 11:31:24.416997 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.417021 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.417031 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.417043 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.417073 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.417115 14046 config.go:182] Loaded profile config "addons-093588": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1202 11:31:24.417236 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.417280 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.417310 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.417478 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.417508 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.417581 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.417639 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.417599 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.417818 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.418318 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.418354 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.418457 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.418520 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.419789 14046 out.go:177] * Verifying Kubernetes components...
I1202 11:31:24.421204 14046 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1202 11:31:24.434933 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
I1202 11:31:24.456366 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35737
I1202 11:31:24.456379 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36237
I1202 11:31:24.456385 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34851
I1202 11:31:24.456522 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41109
I1202 11:31:24.457088 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.457138 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.457205 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.458101 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.458260 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.458270 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.458324 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.458368 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.457145 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.458977 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.458999 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.459125 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.459137 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.459192 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.459310 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.459320 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.459719 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.459746 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.461092 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.461110 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.461162 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.461200 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.461239 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.465673 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.466074 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.466128 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.466547 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.466716 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.466745 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.466905 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.466931 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.471312 14046 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-093588"
I1202 11:31:24.471365 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.471748 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.471776 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.496148 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46685
I1202 11:31:24.496823 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.496855 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35855
I1202 11:31:24.497309 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.497323 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.497591 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.497750 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.497825 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.498357 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.498378 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.498441 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40755
I1202 11:31:24.498718 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41033
I1202 11:31:24.498914 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.499256 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.499500 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.499512 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.500301 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.500723 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.500766 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.500873 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.500903 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.500931 14046 addons.go:234] Setting addon default-storageclass=true in "addons-093588"
I1202 11:31:24.501152 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.501175 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.501511 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.501552 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.501848 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.501865 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.502157 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46689
I1202 11:31:24.502338 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.502394 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
I1202 11:31:24.504903 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35063
I1202 11:31:24.504935 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.504948 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
I1202 11:31:24.504909 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46127
I1202 11:31:24.505440 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.505449 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.505787 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.505960 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.505975 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.506128 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.506144 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.506220 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35429
I1202 11:31:24.506436 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.506455 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.506465 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.506589 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.506809 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.506836 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.506857 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.506898 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.507198 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.507202 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.507232 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.507197 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.507262 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.507386 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.507402 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.507680 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.507716 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.507930 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.507970 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.508215 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.508266 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.508349 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.508886 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.508912 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.509328 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.509388 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:24.509745 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.509772 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.514152 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33003
I1202 11:31:24.514536 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.515551 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.515567 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.515900 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.516060 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.516750 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.516781 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.518417 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.520369 14046 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1202 11:31:24.521684 14046 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1202 11:31:24.521708 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1202 11:31:24.521726 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.521971 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40347
I1202 11:31:24.522467 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.522948 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.522967 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.523428 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.524037 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.524073 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.525278 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.525908 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44411
I1202 11:31:24.525964 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.525982 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.526153 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.526308 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37297
I1202 11:31:24.526334 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.526703 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.526823 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:24.527519 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46699
I1202 11:31:24.528000 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37157
I1202 11:31:24.538245 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44681
I1202 11:31:24.538353 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34773
I1202 11:31:24.538752 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.539216 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.539400 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.539426 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.539696 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.539718 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.539894 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
I1202 11:31:24.540083 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.540314 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.540385 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.540470 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39455
I1202 11:31:24.540924 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.540946 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.541203 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.541339 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.541664 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.541754 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.542181 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.542207 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.542260 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.542496 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41727
I1202 11:31:24.542555 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.542776 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.543259 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.543292 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.543731 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.544610 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.544628 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.544664 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.545126 14046 out.go:177] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
I1202 11:31:24.545168 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.545195 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.545269 14046 out.go:177] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1202 11:31:24.545304 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.545395 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.545435 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.545470 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
I1202 11:31:24.545618 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.546338 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.546345 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.546363 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.546433 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.546568 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.546647 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.547233 14046 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
I1202 11:31:24.547248 14046 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
I1202 11:31:24.547254 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.547267 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.547393 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.547414 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.547445 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.547394 14046 out.go:177] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
I1202 11:31:24.547458 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.547828 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.547575 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.547955 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.547954 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.547972 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.548039 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.548201 14046 out.go:177] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1202 11:31:24.548709 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.548404 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.548416 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.548427 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.549098 14046 out.go:177] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
I1202 11:31:24.549523 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.549556 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.549153 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.549174 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.549198 14046 out.go:177] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1202 11:31:24.550744 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.550442 14046 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
I1202 11:31:24.550949 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1202 11:31:24.550979 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.551266 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:24.551295 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:24.551395 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:24.551754 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:24.551922 14046 out.go:177] - Using image docker.io/registry:2.8.3
I1202 11:31:24.553135 14046 out.go:177] - Using image docker.io/busybox:stable
I1202 11:31:24.553248 14046 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
I1202 11:31:24.553258 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1202 11:31:24.553274 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.551866 14046 out.go:177] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1202 11:31:24.554056 14046 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
I1202 11:31:24.554077 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:24.554099 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:24.554115 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:24.554335 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:24.554353 14046 main.go:141] libmachine: Making call to close connection to plugin binary
W1202 11:31:24.554435 14046 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1202 11:31:24.554726 14046 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1202 11:31:24.554748 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1202 11:31:24.554766 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.555827 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
I1202 11:31:24.556405 14046 out.go:177] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1202 11:31:24.556577 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.557402 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.557420 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.558057 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.558321 14046 out.go:177] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1202 11:31:24.559321 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.559334 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.559360 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.559379 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.559664 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.559799 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.559949 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.559958 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.560013 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.560428 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:24.560586 14046 out.go:177] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1202 11:31:24.561196 14046 out.go:177] - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
I1202 11:31:24.561215 14046 out.go:177] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1202 11:31:24.562449 14046 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1202 11:31:24.562471 14046 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1202 11:31:24.562476 14046 out.go:177] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1202 11:31:24.562492 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.562522 14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1202 11:31:24.562532 14046 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1202 11:31:24.562547 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.562576 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.563149 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.564463 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.564709 14046 out.go:177] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1202 11:31:24.565746 14046 out.go:177] - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
I1202 11:31:24.565810 14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1202 11:31:24.565819 14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1202 11:31:24.565837 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.566364 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.566848 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.566871 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.566945 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.567112 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.567307 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.567663 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:24.567855 14046 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I1202 11:31:24.567997 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.569106 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.569689 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.569721 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.569864 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.570008 14046 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I1202 11:31:24.570020 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.570160 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.570283 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:24.570565 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.570706 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.570730 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.571250 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.571276 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.571330 14046 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1202 11:31:24.571343 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1202 11:31:24.571359 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.571508 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.571529 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.571688 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.571705 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.572025 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.572133 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.572305 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.572362 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.572452 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.572512 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.572564 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.572615 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.572667 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:24.572746 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.572801 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.572834 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.573137 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:24.573154 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.573204 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:24.573482 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:24.573956 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40303
I1202 11:31:24.574649 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.575018 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.575407 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.575423 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.575488 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.575503 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.575585 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.575718 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.575817 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.575932 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:24.576165 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.576573 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.578063 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
I1202 11:31:24.578225 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.578617 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.579075 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.579092 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.579154 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
I1202 11:31:24.579518 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.579676 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.579703 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.579828 14046 out.go:177] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
I1202 11:31:24.580453 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.580474 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.580807 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.580980 14046 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1202 11:31:24.580998 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1202 11:31:24.581005 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.581024 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.583031 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.583283 14046 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1202 11:31:24.583297 14046 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1202 11:31:24.583313 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.583668 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34937
I1202 11:31:24.583768 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37561
I1202 11:31:24.584191 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.584544 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.584976 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.584990 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.585105 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.585117 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.585563 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.586229 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.586400 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.586629 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.586648 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.586682 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.586726 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.587020 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.587065 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.587113 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.587132 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.587150 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.587302 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.587360 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.587553 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:24.587779 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.587937 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.588096 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
W1202 11:31:24.588952 14046 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36208->192.168.39.203:22: read: connection reset by peer
I1202 11:31:24.588979 14046 retry.go:31] will retry after 162.447336ms: ssh: handshake failed: read tcp 192.168.39.1:36208->192.168.39.203:22: read: connection reset by peer
I1202 11:31:24.589096 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.589515 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.591066 14046 out.go:177] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1202 11:31:24.591067 14046 out.go:177] - Using image docker.io/marcnuri/yakd:0.0.5
I1202 11:31:24.592118 14046 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
I1202 11:31:24.592128 14046 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1202 11:31:24.592143 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.592191 14046 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1202 11:31:24.592198 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1202 11:31:24.592207 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.595378 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.595644 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36339
I1202 11:31:24.595803 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.595822 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.595882 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.596063 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.596170 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.596184 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:24.596388 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.596513 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:24.596802 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:24.596819 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:24.596932 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.596948 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
W1202 11:31:24.597052 14046 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36224->192.168.39.203:22: read: connection reset by peer
I1202 11:31:24.597073 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.597073 14046 retry.go:31] will retry after 286.075051ms: ssh: handshake failed: read tcp 192.168.39.1:36224->192.168.39.203:22: read: connection reset by peer
I1202 11:31:24.597103 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:24.597222 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.597240 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:24.597394 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.597488 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
W1202 11:31:24.598040 14046 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36232->192.168.39.203:22: read: connection reset by peer
I1202 11:31:24.598119 14046 retry.go:31] will retry after 354.610148ms: ssh: handshake failed: read tcp 192.168.39.1:36232->192.168.39.203:22: read: connection reset by peer
I1202 11:31:24.598499 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:24.599979 14046 out.go:177] - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
I1202 11:31:24.601395 14046 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1202 11:31:24.601408 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
I1202 11:31:24.601419 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:24.603772 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.604008 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:24.604034 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:24.604262 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:24.604434 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:24.604557 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:24.604666 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:24.838994 14046 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1202 11:31:24.839176 14046 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1202 11:31:24.858733 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1202 11:31:24.883859 14046 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1202 11:31:24.883887 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1202 11:31:24.906094 14046 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
I1202 11:31:24.906113 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
I1202 11:31:24.933933 14046 node_ready.go:35] waiting up to 6m0s for node "addons-093588" to be "Ready" ...
I1202 11:31:24.937202 14046 node_ready.go:49] node "addons-093588" has status "Ready":"True"
I1202 11:31:24.937231 14046 node_ready.go:38] duration metric: took 3.246311ms for node "addons-093588" to be "Ready" ...
I1202 11:31:24.937242 14046 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1202 11:31:24.944817 14046 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace to be "Ready" ...
I1202 11:31:24.959764 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1202 11:31:24.974400 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1202 11:31:25.028401 14046 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
I1202 11:31:25.028429 14046 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1202 11:31:25.064822 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1202 11:31:25.066238 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1202 11:31:25.067275 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1202 11:31:25.094521 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
I1202 11:31:25.096473 14046 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1202 11:31:25.096494 14046 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1202 11:31:25.120768 14046 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1202 11:31:25.120785 14046 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1202 11:31:25.127040 14046 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1202 11:31:25.127059 14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1202 11:31:25.143070 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1202 11:31:25.211041 14046 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
I1202 11:31:25.211067 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1202 11:31:25.224348 14046 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1202 11:31:25.224377 14046 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1202 11:31:25.306922 14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1202 11:31:25.306951 14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1202 11:31:25.312328 14046 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1202 11:31:25.312353 14046 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1202 11:31:25.354346 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1202 11:31:25.367695 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1202 11:31:25.440430 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1202 11:31:25.489263 14046 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
I1202 11:31:25.489288 14046 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1202 11:31:25.494108 14046 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1202 11:31:25.494123 14046 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1202 11:31:25.505892 14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1202 11:31:25.505913 14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1202 11:31:25.736316 14046 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
I1202 11:31:25.736339 14046 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1202 11:31:25.747519 14046 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1202 11:31:25.747550 14046 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1202 11:31:25.785153 14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1202 11:31:25.785175 14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1202 11:31:26.043257 14046 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1202 11:31:26.043281 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1202 11:31:26.066545 14046 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
I1202 11:31:26.066566 14046 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1202 11:31:26.144474 14046 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1202 11:31:26.144499 14046 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1202 11:31:26.257832 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1202 11:31:26.275811 14046 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
I1202 11:31:26.275838 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1202 11:31:26.434738 14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1202 11:31:26.434762 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1202 11:31:26.548682 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1202 11:31:26.657205 14046 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.81798875s)
I1202 11:31:26.657245 14046 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
I1202 11:31:26.856310 14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1202 11:31:26.856338 14046 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1202 11:31:26.953933 14046 pod_ready.go:103] pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace has status "Ready":"False"
I1202 11:31:27.167644 14046 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-093588" context rescaled to 1 replicas
I1202 11:31:27.259915 14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1202 11:31:27.259936 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1202 11:31:27.322490 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.463722869s)
I1202 11:31:27.322546 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:27.322564 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:27.322869 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:27.322892 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:27.322905 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:27.322920 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:27.322928 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:27.323166 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:27.323183 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:27.562371 14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1202 11:31:27.562393 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1202 11:31:27.852297 14046 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1202 11:31:27.852375 14046 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1202 11:31:28.172709 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1202 11:31:28.882091 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.922291512s)
I1202 11:31:28.882152 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:28.882167 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:28.882444 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:28.882462 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:28.882477 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:28.882489 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:28.882787 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:28.882836 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:29.144693 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.170258844s)
I1202 11:31:29.144737 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.079888384s)
I1202 11:31:29.144756 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:29.144770 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:29.144799 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.078529681s)
I1202 11:31:29.144757 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:29.144846 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:29.144852 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.07756002s)
I1202 11:31:29.144846 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:29.144869 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:29.144875 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:29.144879 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:29.145322 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:29.145331 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:29.145341 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:29.145346 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:29.145353 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:29.145349 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:29.145356 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:29.145375 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:29.145388 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:29.145396 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:29.145403 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:29.145410 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:29.145364 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:29.145442 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:29.145449 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:29.145457 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:29.145463 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:29.145509 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:29.145538 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:29.145569 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:29.145631 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:29.145731 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:29.145771 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:29.145586 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:29.145602 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:29.145796 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:29.145408 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:29.145753 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:29.147160 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:29.147164 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:29.147221 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:29.196163 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:29.196191 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:29.196457 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:29.196478 14046 main.go:141] libmachine: Making call to close connection to plugin binary
W1202 11:31:29.196562 14046 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
I1202 11:31:29.226575 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:29.226598 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:29.226874 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:29.226906 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:29.226912 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:29.494215 14046 pod_ready.go:103] pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace has status "Ready":"False"
I1202 11:31:29.630127 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.535563804s)
I1202 11:31:29.630190 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:29.630203 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:29.630516 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:29.630567 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:29.630585 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:29.630598 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:29.630831 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:29.630845 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:29.630876 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:31.078617 14046 pod_ready.go:93] pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace has status "Ready":"True"
I1202 11:31:31.078643 14046 pod_ready.go:82] duration metric: took 6.133804282s for pod "coredns-7c65d6cfc9-5lcqk" in "kube-system" namespace to be "Ready" ...
I1202 11:31:31.078656 14046 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sh425" in "kube-system" namespace to be "Ready" ...
I1202 11:31:31.604453 14046 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1202 11:31:31.604494 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:31.607456 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:31.607858 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:31.607881 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:31.608126 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:31.608355 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:31.608517 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:31.608723 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:32.118212 14046 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1202 11:31:32.349435 14046 addons.go:234] Setting addon gcp-auth=true in "addons-093588"
I1202 11:31:32.349504 14046 host.go:66] Checking if "addons-093588" exists ...
I1202 11:31:32.349844 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:32.349891 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:32.364178 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41859
I1202 11:31:32.364679 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:32.365165 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:32.365189 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:32.365508 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:32.366128 14046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1202 11:31:32.366180 14046 main.go:141] libmachine: Launching plugin server for driver kvm2
I1202 11:31:32.380001 14046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33641
I1202 11:31:32.380476 14046 main.go:141] libmachine: () Calling .GetVersion
I1202 11:31:32.380998 14046 main.go:141] libmachine: Using API Version 1
I1202 11:31:32.381020 14046 main.go:141] libmachine: () Calling .SetConfigRaw
I1202 11:31:32.381308 14046 main.go:141] libmachine: () Calling .GetMachineName
I1202 11:31:32.381494 14046 main.go:141] libmachine: (addons-093588) Calling .GetState
I1202 11:31:32.382817 14046 main.go:141] libmachine: (addons-093588) Calling .DriverName
I1202 11:31:32.382990 14046 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1202 11:31:32.383016 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHHostname
I1202 11:31:32.385576 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:32.385914 14046 main.go:141] libmachine: (addons-093588) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:ff:d0", ip: ""} in network mk-addons-093588: {Iface:virbr1 ExpiryTime:2024-12-02 12:30:53 +0000 UTC Type:0 Mac:52:54:00:8a:ff:d0 Iaid: IPaddr:192.168.39.203 Prefix:24 Hostname:addons-093588 Clientid:01:52:54:00:8a:ff:d0}
I1202 11:31:32.385938 14046 main.go:141] libmachine: (addons-093588) DBG | domain addons-093588 has defined IP address 192.168.39.203 and MAC address 52:54:00:8a:ff:d0 in network mk-addons-093588
I1202 11:31:32.386042 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHPort
I1202 11:31:32.386234 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHKeyPath
I1202 11:31:32.386377 14046 main.go:141] libmachine: (addons-093588) Calling .GetSSHUsername
I1202 11:31:32.386522 14046 sshutil.go:53] new ssh client: &{IP:192.168.39.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20033-6257/.minikube/machines/addons-093588/id_rsa Username:docker}
I1202 11:31:32.584928 14046 pod_ready.go:93] pod "coredns-7c65d6cfc9-sh425" in "kube-system" namespace has status "Ready":"True"
I1202 11:31:32.584947 14046 pod_ready.go:82] duration metric: took 1.506285543s for pod "coredns-7c65d6cfc9-sh425" in "kube-system" namespace to be "Ready" ...
I1202 11:31:32.584957 14046 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-093588" in "kube-system" namespace to be "Ready" ...
I1202 11:31:32.592458 14046 pod_ready.go:93] pod "etcd-addons-093588" in "kube-system" namespace has status "Ready":"True"
I1202 11:31:32.592477 14046 pod_ready.go:82] duration metric: took 7.514441ms for pod "etcd-addons-093588" in "kube-system" namespace to be "Ready" ...
I1202 11:31:32.592489 14046 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-093588" in "kube-system" namespace to be "Ready" ...
I1202 11:31:32.601795 14046 pod_ready.go:93] pod "kube-apiserver-addons-093588" in "kube-system" namespace has status "Ready":"True"
I1202 11:31:32.601819 14046 pod_ready.go:82] duration metric: took 9.321566ms for pod "kube-apiserver-addons-093588" in "kube-system" namespace to be "Ready" ...
I1202 11:31:32.601831 14046 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-093588" in "kube-system" namespace to be "Ready" ...
I1202 11:31:32.611257 14046 pod_ready.go:93] pod "kube-controller-manager-addons-093588" in "kube-system" namespace has status "Ready":"True"
I1202 11:31:32.611277 14046 pod_ready.go:82] duration metric: took 9.438391ms for pod "kube-controller-manager-addons-093588" in "kube-system" namespace to be "Ready" ...
I1202 11:31:32.611290 14046 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8bqbx" in "kube-system" namespace to be "Ready" ...
I1202 11:31:32.628996 14046 pod_ready.go:93] pod "kube-proxy-8bqbx" in "kube-system" namespace has status "Ready":"True"
I1202 11:31:32.629014 14046 pod_ready.go:82] duration metric: took 17.716285ms for pod "kube-proxy-8bqbx" in "kube-system" namespace to be "Ready" ...
I1202 11:31:32.629025 14046 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-093588" in "kube-system" namespace to be "Ready" ...
I1202 11:31:33.216468 14046 pod_ready.go:93] pod "kube-scheduler-addons-093588" in "kube-system" namespace has status "Ready":"True"
I1202 11:31:33.216492 14046 pod_ready.go:82] duration metric: took 587.459361ms for pod "kube-scheduler-addons-093588" in "kube-system" namespace to be "Ready" ...
I1202 11:31:33.216500 14046 pod_ready.go:39] duration metric: took 8.279244651s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1202 11:31:33.216514 14046 api_server.go:52] waiting for apiserver process to appear ...
I1202 11:31:33.216560 14046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1202 11:31:33.790935 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.647826005s)
I1202 11:31:33.791001 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:33.791005 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (8.436618964s)
I1202 11:31:33.791013 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:33.791043 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:33.791055 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:33.791071 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.423342397s)
I1202 11:31:33.791100 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:33.791119 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:33.791168 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.35070535s)
I1202 11:31:33.791202 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:33.791213 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:33.791317 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.533440621s)
W1202 11:31:33.791345 14046 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1202 11:31:33.791371 14046 retry.go:31] will retry after 369.700432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1202 11:31:33.791378 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:33.791400 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:33.791427 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.242721006s)
I1202 11:31:33.791438 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:33.791437 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:33.791445 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:33.791446 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:33.791450 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:33.791468 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:33.791476 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:33.791488 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:33.791454 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:33.791521 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:33.791456 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:33.791549 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:33.791493 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:33.791527 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:33.791567 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:33.791575 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:33.791531 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:33.791584 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:33.791586 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:33.791593 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:33.791875 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:33.791887 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:33.791898 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:33.791905 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:33.791907 14046 addons.go:475] Verifying addon metrics-server=true in "addons-093588"
I1202 11:31:33.791912 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:33.791950 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:33.791969 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:33.791987 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:33.791994 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:33.792000 14046 addons.go:475] Verifying addon ingress=true in "addons-093588"
I1202 11:31:33.792201 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:33.792244 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:33.792252 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:33.792260 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:33.792268 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:33.793186 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:33.793210 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:33.793216 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:33.793370 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:33.793378 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:33.793386 14046 addons.go:475] Verifying addon registry=true in "addons-093588"
I1202 11:31:33.794805 14046 out.go:177] * Verifying ingress addon...
I1202 11:31:33.794865 14046 out.go:177] * Verifying registry addon...
I1202 11:31:33.794866 14046 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-093588 service yakd-dashboard -n yakd-dashboard
I1202 11:31:33.797060 14046 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1202 11:31:33.797129 14046 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1202 11:31:33.824422 14046 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1202 11:31:33.824444 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:33.824727 14046 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1202 11:31:33.824743 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:34.161790 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1202 11:31:34.343123 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:34.350325 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:34.694164 14046 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.311156085s)
I1202 11:31:34.694245 14046 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.477671289s)
I1202 11:31:34.694273 14046 api_server.go:72] duration metric: took 10.278626446s to wait for apiserver process to appear ...
I1202 11:31:34.694284 14046 api_server.go:88] waiting for apiserver healthz status ...
I1202 11:31:34.694305 14046 api_server.go:253] Checking apiserver healthz at https://192.168.39.203:8443/healthz ...
I1202 11:31:34.694162 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.521398959s)
I1202 11:31:34.694468 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:34.694493 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:34.694736 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:34.694753 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:34.694762 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:34.694769 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:34.694740 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:34.694983 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:34.694992 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:34.695001 14046 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-093588"
I1202 11:31:34.695677 14046 out.go:177] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
I1202 11:31:34.696349 14046 out.go:177] * Verifying csi-hostpath-driver addon...
I1202 11:31:34.697823 14046 out.go:177] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1202 11:31:34.698942 14046 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1202 11:31:34.698953 14046 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1202 11:31:34.699027 14046 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1202 11:31:34.717379 14046 api_server.go:279] https://192.168.39.203:8443/healthz returned 200:
ok
I1202 11:31:34.733562 14046 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1202 11:31:34.733577 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:34.734181 14046 api_server.go:141] control plane version: v1.31.2
I1202 11:31:34.734195 14046 api_server.go:131] duration metric: took 39.902425ms to wait for apiserver health ...
I1202 11:31:34.734202 14046 system_pods.go:43] waiting for kube-system pods to appear ...
I1202 11:31:34.772298 14046 system_pods.go:59] 19 kube-system pods found
I1202 11:31:34.772342 14046 system_pods.go:61] "amd-gpu-device-plugin-9x4xz" [55df6bd8-36c5-4864-8918-ac9425f2f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1202 11:31:34.772353 14046 system_pods.go:61] "coredns-7c65d6cfc9-5lcqk" [4d7cf83e-5dd7-42fb-982f-a45f12d7a40b] Running
I1202 11:31:34.772365 14046 system_pods.go:61] "coredns-7c65d6cfc9-sh425" [749fc6c5-7fb8-4660-876f-15b8c46c2e50] Running
I1202 11:31:34.772376 14046 system_pods.go:61] "csi-hostpath-attacher-0" [9090d43f-db00-4d9f-a761-7e784e7d66e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1202 11:31:34.772391 14046 system_pods.go:61] "csi-hostpath-resizer-0" [eacac2d8-005d-4f85-aa5f-5ee6725473a4] Pending
I1202 11:31:34.772405 14046 system_pods.go:61] "csi-hostpathplugin-jtbvg" [5558e993-a5eb-47db-b72e-028a2df87321] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1202 11:31:34.772416 14046 system_pods.go:61] "etcd-addons-093588" [133711db-b531-4f45-b56d-d479fc0d3bf2] Running
I1202 11:31:34.772427 14046 system_pods.go:61] "kube-apiserver-addons-093588" [4fa270b4-87bc-41ea-9c7e-d194a6a7a8dd] Running
I1202 11:31:34.772438 14046 system_pods.go:61] "kube-controller-manager-addons-093588" [b742eb2a-db16-4d33-8520-0bbb9c083127] Running
I1202 11:31:34.772452 14046 system_pods.go:61] "kube-ingress-dns-minikube" [93d2e4da-4868-4b1e-9718-bcc404d49f31] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1202 11:31:34.772462 14046 system_pods.go:61] "kube-proxy-8bqbx" [f637fa3b-3c50-489d-b864-5477922486f8] Running
I1202 11:31:34.772473 14046 system_pods.go:61] "kube-scheduler-addons-093588" [115de73f-014e-43eb-bf1c-4294dc736871] Running
I1202 11:31:34.772486 14046 system_pods.go:61] "metrics-server-84c5f94fbc-z5r8x" [b4ffaa02-f311-4afa-9113-ac7a8b7b5828] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1202 11:31:34.772500 14046 system_pods.go:61] "nvidia-device-plugin-daemonset-zprhh" [1292e790-4f25-49e8-a26d-3925b308ef53] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1202 11:31:34.772515 14046 system_pods.go:61] "registry-66c9cd494c-4dmpv" [4ba754ca-3bc4-4639-bbf2-9d771c422d1f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1202 11:31:34.772529 14046 system_pods.go:61] "registry-proxy-84nx4" [d2473044-c394-4b78-8583-763661c9c329] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1202 11:31:34.772544 14046 system_pods.go:61] "snapshot-controller-56fcc65765-5684m" [1b9feacd-f2e4-41f7-abc9-06e472d66f0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1202 11:31:34.772558 14046 system_pods.go:61] "snapshot-controller-56fcc65765-dj6kc" [ea0e750d-7300-4238-9443-627b04eb650d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1202 11:31:34.772570 14046 system_pods.go:61] "storage-provisioner" [90465e3b-c05f-4fff-a0f6-c6a8b7703e89] Running
I1202 11:31:34.772583 14046 system_pods.go:74] duration metric: took 38.374545ms to wait for pod list to return data ...
I1202 11:31:34.772598 14046 default_sa.go:34] waiting for default service account to be created ...
I1202 11:31:34.779139 14046 default_sa.go:45] found service account: "default"
I1202 11:31:34.779155 14046 default_sa.go:55] duration metric: took 6.550708ms for default service account to be created ...
I1202 11:31:34.779163 14046 system_pods.go:116] waiting for k8s-apps to be running ...
I1202 11:31:34.807767 14046 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1202 11:31:34.807791 14046 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1202 11:31:34.811811 14046 system_pods.go:86] 19 kube-system pods found
I1202 11:31:34.811834 14046 system_pods.go:89] "amd-gpu-device-plugin-9x4xz" [55df6bd8-36c5-4864-8918-ac9425f2f9cb] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
I1202 11:31:34.811839 14046 system_pods.go:89] "coredns-7c65d6cfc9-5lcqk" [4d7cf83e-5dd7-42fb-982f-a45f12d7a40b] Running
I1202 11:31:34.811846 14046 system_pods.go:89] "coredns-7c65d6cfc9-sh425" [749fc6c5-7fb8-4660-876f-15b8c46c2e50] Running
I1202 11:31:34.811851 14046 system_pods.go:89] "csi-hostpath-attacher-0" [9090d43f-db00-4d9f-a761-7e784e7d66e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1202 11:31:34.811862 14046 system_pods.go:89] "csi-hostpath-resizer-0" [eacac2d8-005d-4f85-aa5f-5ee6725473a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1202 11:31:34.811871 14046 system_pods.go:89] "csi-hostpathplugin-jtbvg" [5558e993-a5eb-47db-b72e-028a2df87321] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1202 11:31:34.811874 14046 system_pods.go:89] "etcd-addons-093588" [133711db-b531-4f45-b56d-d479fc0d3bf2] Running
I1202 11:31:34.811878 14046 system_pods.go:89] "kube-apiserver-addons-093588" [4fa270b4-87bc-41ea-9c7e-d194a6a7a8dd] Running
I1202 11:31:34.811882 14046 system_pods.go:89] "kube-controller-manager-addons-093588" [b742eb2a-db16-4d33-8520-0bbb9c083127] Running
I1202 11:31:34.811890 14046 system_pods.go:89] "kube-ingress-dns-minikube" [93d2e4da-4868-4b1e-9718-bcc404d49f31] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1202 11:31:34.811893 14046 system_pods.go:89] "kube-proxy-8bqbx" [f637fa3b-3c50-489d-b864-5477922486f8] Running
I1202 11:31:34.811900 14046 system_pods.go:89] "kube-scheduler-addons-093588" [115de73f-014e-43eb-bf1c-4294dc736871] Running
I1202 11:31:34.811907 14046 system_pods.go:89] "metrics-server-84c5f94fbc-z5r8x" [b4ffaa02-f311-4afa-9113-ac7a8b7b5828] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1202 11:31:34.811912 14046 system_pods.go:89] "nvidia-device-plugin-daemonset-zprhh" [1292e790-4f25-49e8-a26d-3925b308ef53] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1202 11:31:34.811920 14046 system_pods.go:89] "registry-66c9cd494c-4dmpv" [4ba754ca-3bc4-4639-bbf2-9d771c422d1f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1202 11:31:34.811925 14046 system_pods.go:89] "registry-proxy-84nx4" [d2473044-c394-4b78-8583-763661c9c329] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1202 11:31:34.811930 14046 system_pods.go:89] "snapshot-controller-56fcc65765-5684m" [1b9feacd-f2e4-41f7-abc9-06e472d66f0b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1202 11:31:34.811935 14046 system_pods.go:89] "snapshot-controller-56fcc65765-dj6kc" [ea0e750d-7300-4238-9443-627b04eb650d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1202 11:31:34.811941 14046 system_pods.go:89] "storage-provisioner" [90465e3b-c05f-4fff-a0f6-c6a8b7703e89] Running
I1202 11:31:34.811947 14046 system_pods.go:126] duration metric: took 32.779668ms to wait for k8s-apps to be running ...
I1202 11:31:34.811953 14046 system_svc.go:44] waiting for kubelet service to be running ....
I1202 11:31:34.811993 14046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1202 11:31:34.814772 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:34.814898 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:34.865148 14046 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1202 11:31:34.865170 14046 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1202 11:31:34.910684 14046 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1202 11:31:35.212476 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:35.302270 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:35.306145 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:35.704047 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:35.804040 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:35.804460 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:35.906004 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.744152206s)
I1202 11:31:35.906055 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:35.906063 14046 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.094047231s)
I1202 11:31:35.906092 14046 system_svc.go:56] duration metric: took 1.094134923s WaitForService to wait for kubelet
I1202 11:31:35.906107 14046 kubeadm.go:582] duration metric: took 11.490458054s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1202 11:31:35.906141 14046 node_conditions.go:102] verifying NodePressure condition ...
I1202 11:31:35.906072 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:35.906478 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:35.906510 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:35.906522 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:35.906529 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:35.906722 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:35.906735 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:35.909515 14046 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1202 11:31:35.909536 14046 node_conditions.go:123] node cpu capacity is 2
I1202 11:31:35.909545 14046 node_conditions.go:105] duration metric: took 3.397157ms to run NodePressure ...
I1202 11:31:35.909555 14046 start.go:241] waiting for startup goroutines ...
I1202 11:31:36.207546 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:36.311696 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:36.323552 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:36.524594 14046 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.613849544s)
I1202 11:31:36.524666 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:36.524682 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:36.525003 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:36.525022 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:36.525036 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:36.525064 14046 main.go:141] libmachine: Making call to close driver server
I1202 11:31:36.525075 14046 main.go:141] libmachine: (addons-093588) Calling .Close
I1202 11:31:36.525318 14046 main.go:141] libmachine: (addons-093588) DBG | Closing plugin on server side
I1202 11:31:36.525334 14046 main.go:141] libmachine: Successfully made call to close driver server
I1202 11:31:36.525348 14046 main.go:141] libmachine: Making call to close connection to plugin binary
I1202 11:31:36.526230 14046 addons.go:475] Verifying addon gcp-auth=true in "addons-093588"
I1202 11:31:36.528737 14046 out.go:177] * Verifying gcp-auth addon...
I1202 11:31:36.530986 14046 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1202 11:31:36.578001 14046 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1202 11:31:36.578020 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:36.704649 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:36.809208 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:36.809895 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:37.037424 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:37.203141 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:37.301723 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:37.302535 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:37.535104 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:37.703267 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:37.802335 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:37.802610 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:38.036909 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:38.204479 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:38.301632 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:38.302255 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:38.534810 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:38.704658 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:38.802708 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:38.803554 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:39.037174 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:39.292617 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:39.392307 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:39.392645 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:39.535333 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:39.704929 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:39.802557 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:39.803397 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:40.035299 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:40.205429 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:40.301785 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:40.301851 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:40.535337 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:40.703275 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:40.800655 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:40.801812 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:41.034994 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:41.204157 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:41.302831 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:41.303262 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:41.535151 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:41.703985 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:41.801319 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:41.801446 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:42.034352 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:42.203443 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:42.302890 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:42.304166 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:42.535013 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:42.703286 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:42.800816 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:42.801395 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:43.035672 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:43.203886 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:43.300980 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:43.301410 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:43.535388 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:43.704078 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:43.801008 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:43.801871 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:44.035750 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:44.241245 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:44.303030 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:44.303402 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:44.535189 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:44.704145 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:44.802535 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:44.803477 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:45.035547 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:45.205121 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:45.302246 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:45.306235 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:45.534465 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:45.703630 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:45.801940 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:45.802281 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:46.035662 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:46.203259 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:46.302067 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:46.302106 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:46.534762 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:46.703700 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:46.800864 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:46.802040 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:47.036727 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:47.204080 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:47.301844 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:47.301978 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:47.534983 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:47.704106 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:47.801707 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:47.803397 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:48.035137 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:48.203099 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:48.301547 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:48.301783 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:48.533891 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:48.703958 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:48.800958 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:48.801440 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:49.034561 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:49.204427 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:49.300634 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:49.301040 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:49.796093 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:49.796650 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:49.894974 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:49.895409 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:50.035131 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:50.205221 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:50.303043 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:50.303481 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:50.534978 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:50.704273 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:50.801772 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:50.801913 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:51.036221 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:51.202958 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:51.301672 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:51.303883 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:51.535974 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:51.705307 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:51.801763 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:51.802054 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:52.034979 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:52.204086 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:52.304301 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:52.305641 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:52.535427 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:52.704315 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:52.802423 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:52.802894 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:53.034594 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:53.204339 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:53.303653 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:53.306254 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:53.535883 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:53.704290 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:53.801531 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:53.802072 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:54.117303 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:54.203910 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:54.302087 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:54.302794 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:54.535306 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:54.703953 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:54.801915 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:54.801935 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:55.035228 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:55.203582 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:55.301814 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:55.302766 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:55.534254 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:55.703526 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:55.801462 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:55.801784 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:56.034736 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:56.204957 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:56.302824 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:56.303171 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:56.535416 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:56.704209 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:56.800476 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:56.802007 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:57.034734 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:57.204149 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:57.301587 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:57.302347 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:57.534833 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:57.704817 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:57.802147 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:57.802493 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:58.034493 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:58.203588 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:58.301828 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:58.302488 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:58.534315 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:58.705874 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:58.801208 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:58.802117 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:59.035206 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:59.204016 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:59.300680 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:59.301228 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:31:59.534267 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:31:59.703462 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:31:59.802411 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:31:59.805743 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:00.034944 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:00.205868 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:00.302403 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:32:00.302619 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:00.535930 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:00.705347 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:00.802373 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:32:00.802691 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:01.034165 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:01.203083 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:01.302108 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:01.302231 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:32:01.534962 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:01.704177 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:01.800790 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:32:01.801125 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:02.035522 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:02.207255 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:02.305529 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:32:02.305891 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:02.535277 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:02.703940 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:02.801885 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:02.801903 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:32:03.035451 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:03.203573 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:03.302065 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:03.302261 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:32:03.535720 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:03.703935 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:03.800844 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:32:03.801307 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:04.035517 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:04.209494 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:04.301432 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:32:04.302504 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:04.534911 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:04.703576 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:04.803619 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1202 11:32:04.804099 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:05.037027 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:05.204348 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:05.304406 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:05.305143 14046 kapi.go:107] duration metric: took 31.508010049s to wait for kubernetes.io/minikube-addons=registry ...
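(The registry addon pods have just become Ready at this point in the log. A minimal sketch of an equivalent manual check, assuming the addons-093588 context used throughout this log and the same label selector the log is polling on, could be:

    kubectl --context addons-093588 get pods -A -l kubernetes.io/minikube-addons=registry

The namespace is left unspecified via -A because the log lines above do not name it.)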
I1202 11:32:05.539056 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:05.704700 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:05.804304 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:06.039817 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:06.205353 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:06.310095 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:06.534977 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:06.704090 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:06.800726 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:07.035759 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:07.204852 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:07.301177 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:07.534942 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:07.703430 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:07.801253 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:08.035545 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:08.203485 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:08.304272 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:08.535354 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:08.703653 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:08.801345 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:09.035283 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:09.203667 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:09.301315 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:09.534575 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:09.708677 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:09.801812 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:10.034861 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:10.204571 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:10.685014 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:10.785858 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:10.786536 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:10.800928 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:11.034660 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:11.203914 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:11.303391 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:11.535680 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:11.704751 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:11.805498 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:12.043914 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:12.203937 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:12.301289 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:12.536468 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:13.048324 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:13.048675 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:13.048713 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:13.206976 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:13.306351 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:13.535323 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:13.704264 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:13.804182 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:14.035842 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:14.208917 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:14.301365 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:14.535026 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:14.703588 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:14.801725 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:15.034610 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:15.204327 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:15.304934 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:15.534739 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:15.704785 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:15.801778 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:16.034504 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:16.204196 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:16.630650 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:16.632171 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:16.703056 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:16.801188 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:17.034638 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:17.204193 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:17.305590 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:17.537824 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:17.703501 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:17.801783 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:18.274930 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:18.277014 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:18.324560 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:18.536509 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:18.704072 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:18.801749 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:19.036866 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:19.203700 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:19.305338 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:19.534946 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:19.703543 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:19.801503 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:20.033851 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:20.204394 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:20.301489 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:20.534043 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:20.704035 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:20.802048 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:21.035351 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:21.204075 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:21.304623 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:21.534698 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:21.703740 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:21.800941 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:22.035176 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:22.204538 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:22.303225 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:22.535611 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:22.703682 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:22.802117 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:23.379807 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:23.382707 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:23.383795 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:23.537984 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:23.707670 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:23.801120 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:24.035076 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:24.205347 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:24.301126 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:24.535567 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:24.703844 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:24.801658 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:25.035126 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:25.205250 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:25.302531 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:25.535923 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:25.703680 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:25.801499 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:26.034235 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:26.204524 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:26.301216 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:26.534899 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:26.703670 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:26.801160 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:27.034705 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:27.209222 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:27.311879 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:27.551203 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:27.706021 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:27.804614 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:28.035342 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:28.203667 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:28.301793 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:28.544354 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:28.711784 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:28.810267 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:29.034649 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:29.204547 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:29.301152 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:29.534413 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:29.704108 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:29.802865 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:30.035779 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:30.204665 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:30.304717 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:30.534685 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:30.703851 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:30.802376 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:31.037512 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:31.544834 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:31.545362 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:31.557069 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:31.706516 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:31.807268 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:32.034741 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:32.204171 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:32.301464 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:32.534454 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:32.704155 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:32.801829 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:33.034795 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:33.203510 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:33.306267 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:33.536390 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:33.708085 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:33.802088 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:34.034963 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:34.204776 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:34.308108 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:34.536044 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:34.703641 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:34.804438 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:35.035343 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:35.203465 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1202 11:32:35.303592 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:35.535810 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:35.709085 14046 kapi.go:107] duration metric: took 1m1.010057933s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1202 11:32:35.802151 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:36.035498 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:36.301273 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:36.534659 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:36.801419 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:37.035446 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:37.301607 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:37.534705 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:37.803178 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:38.035229 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:38.301283 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:38.535357 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:38.801506 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:39.035756 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:39.303845 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:39.536507 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:39.803141 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:40.035121 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:40.308205 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:40.535897 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:40.803283 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:41.035083 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:41.302929 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:41.534524 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:41.801381 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:42.035509 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:42.301517 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:42.534206 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:42.801696 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:43.363908 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:43.367795 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:43.534145 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:43.801034 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:44.035075 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:44.301680 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:44.535413 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:44.802593 14046 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1202 11:32:45.036399 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:45.303689 14046 kapi.go:107] duration metric: took 1m11.506622692s to wait for app.kubernetes.io/name=ingress-nginx ...
I1202 11:32:45.534723 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:46.035278 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:46.534932 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:47.034975 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:47.535739 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:48.034856 14046 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1202 11:32:48.535985 14046 kapi.go:107] duration metric: took 1m12.004997488s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1202 11:32:48.537647 14046 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-093588 cluster.
I1202 11:32:48.538975 14046 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1202 11:32:48.540091 14046 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
I1202 11:32:48.541177 14046 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, nvidia-device-plugin, default-storageclass, inspektor-gadget, metrics-server, amd-gpu-device-plugin, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1202 11:32:48.542184 14046 addons.go:510] duration metric: took 1m24.126505676s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner nvidia-device-plugin default-storageclass inspektor-gadget metrics-server amd-gpu-device-plugin yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1202 11:32:48.542232 14046 start.go:246] waiting for cluster config update ...
I1202 11:32:48.542256 14046 start.go:255] writing updated cluster config ...
I1202 11:32:48.542565 14046 ssh_runner.go:195] Run: rm -f paused
I1202 11:32:48.592664 14046 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
I1202 11:32:48.594409 14046 out.go:177] * Done! kubectl is now configured to use "addons-093588" cluster and "default" namespace by default
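The gcp-auth addon messages above note that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a rough, hypothetical sketch of what such a pod spec could look like (the pod name, container image, and label value "true" are illustrative assumptions, not taken from this run):

```go
// Hypothetical example (not part of the test run): build a Pod manifest that
// carries the `gcp-auth-skip-secret` label mentioned in the gcp-auth output above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds", // illustrative name
			Namespace: "default",
			// Per the addon message, the presence of this label key asks
			// gcp-auth to skip mounting credentials into this pod.
			// The value "true" is an assumption; the log only names the key.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "nginx"}, // placeholder container
			},
		},
	}

	// Print the manifest as JSON; it could be piped to `kubectl apply -f -`.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

The same label can equally be set directly in a YAML manifest; the Go form is only used here to keep the example self-contained.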
==> CRI-O <==
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.546645088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139376546619807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3be55fb7-14d1-42f4-8e80-2a101d61c7e1 name=/runtime.v1.ImageService/ImageFsInfo
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.547134748Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=20b53e1c-ffb8-4d26-b388-2abd52ddfd02 name=/runtime.v1.RuntimeService/ListContainers
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.547272430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=20b53e1c-ffb8-4d26-b388-2abd52ddfd02 name=/runtime.v1.RuntimeService/ListContainers
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.547655364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733139376384172980,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ac1b9f95162ba75981741aaa49f12158cf043a8fdf9d4744bcf8968c12e5c9,PodSandboxId:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733139238782351328,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e1825b9c515e3ce7597d470b44e8214bc28c9ebaec69cfa21450036896bbd,PodSandboxId:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733139172191173333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-8
3f9-2119471a0df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3decc9cb607d18fa1d54ce547fba6341d15db31ca406cbd9e3b67c7274100e4,PodSandboxId:4dd378cbb1fe84c8a415b23a3fa25fd73a272f3a269862b2ce85b9144c6d0c04,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733139164704900997,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-jl9qn,io.kubernetes.pod.namespace: ingress-nginx,i
o.kubernetes.pod.uid: 4a36ffd2-b76d-4ad2-bf9a-cbd21cdc413d,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1c9894577d1715169976fac54e5e92fc068f4f1d8e28d8e59c638e2c000387fa,PodSandboxId:8692758ceeb9f604124de345e5be36a361c70a6a1e43061b1528f416cab23b16,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526f
f8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733139147156540134,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s2pxw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4ae830b-5959-4890-ba55-97c4e9066abc,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a0c87a19737894551d5457b50e806a71136166c7c35634861f55dca03207a3,PodSandboxId:5bb9c45ae596373f4daceb762f50465fa5db581af1f3941f89861fac201463ef,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cb
fbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733139147066341976,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7l67n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db5f3353-9ef7-4541-841d-b6d35db7f932,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3350f86d51adaa1294f38432a96d12f58dad3c88cb1b63f53d129a72f079c5a3,PodSandboxId:4f8cd12020a861322b02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-
server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733139133148376734,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd688bac9204be3d8b36fc16aa1eee1297e33d7bd568e04857088c350e23ddd2,PodSandboxId:727a2ad10b461920698fe35b169776c
ffd8807d863618b4787992c500f52f387,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733139125687482505,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ee4c3f1d373dc3b1a44810905158446a9b776b
9f7557b488e4222707c7dafb,PodSandboxId:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733139123415093119,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
efc6f97dd502796421c7ace089a6a9f104b7940b859b4ddfda4f5c8b56f5da02,PodSandboxId:65d00dd604b777559653c55a6466bb79d7d85b16d8ff30bb6fdbf659da3855f4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733139101270566536,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d2e4da-4868-4b1e-9718-bcc404d49f31,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328,PodSandboxId:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733139092690590220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6,PodSandboxId:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733139088826417039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7,PodSandboxId:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139084116954134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630,PodSandboxId:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139073515107013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,},Annotations:map[string]string{io.kubernetes.container.ha
sh: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55,PodSandboxId:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139073495156905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c,PodSandboxId:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139073507420703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96,PodSandboxId:94204ef648dac42b0379640042a7c974af9203d300edda9454e6243defccdd64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139073500988739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=20b53e1c-ffb8-4d26-b388-2abd52ddfd02 name=/runtime.v1.RuntimeService/ListContainers
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.562538699Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=0f7c457e-7579-4a90-8d35-94e904b03c15 name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.562931824Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-bq9jt,Uid:6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139375394317316,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:36:15.077147569Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&PodSandboxMetadata{Name:nginx,Uid:9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1733139236134573190,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:33:55.813784593Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&PodSandboxMetadata{Name:busybox,Uid:9f6e4744-0d79-497c-83f9-2119471a0df3,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139169488004241,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-83f9-2119471a0df3,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:32:49.179630307Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4dd378cbb1fe84c8a4
15b23a3fa25fd73a272f3a269862b2ce85b9144c6d0c04,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-5f85ff4588-jl9qn,Uid:4a36ffd2-b76d-4ad2-bf9a-cbd21cdc413d,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139157786858360,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-jl9qn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4a36ffd2-b76d-4ad2-bf9a-cbd21cdc413d,pod-template-hash: 5f85ff4588,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:33.568934434Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8692758ceeb9f604124de345e5be36a361c70a6a1e43061b1528f416cab23b16,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-s2pxw,Uid:c4ae830b-5959-4890-ba55-97c4e9066abc,Namespace:ingress-nginx,Attempt:0,}
,State:SANDBOX_NOTREADY,CreatedAt:1733139095466543770,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: b805dbc6-2b5f-4e42-adf4-40c1eff8ead2,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: b805dbc6-2b5f-4e42-adf4-40c1eff8ead2,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-s2pxw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4ae830b-5959-4890-ba55-97c4e9066abc,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:33.753971573Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5bb9c45ae596373f4daceb762f50465fa5db581af1f3941f89861fac201463ef,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-7l67n,Uid:db5f3353-9ef7-4541-841d-b6d35db7f932,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,C
reatedAt:1733139095432878077,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: b70b0154-91d5-4b9d-80d0-955dc822ff8f,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: b70b0154-91d5-4b9d-80d0-955dc822ff8f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-7l67n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db5f3353-9ef7-4541-841d-b6d35db7f932,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:33.692954067Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f8cd12020a861322b02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-z5r8x,Uid:b4ffaa02-f311-4afa-9113-ac7a8b7b5828,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139090850154614,Lab
els:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:30.233577345Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:727a2ad10b461920698fe35b169776cffd8807d863618b4787992c500f52f387,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-86d989889c-6bbl8,Uid:c2094412-6704-4c4f-8bc7-c21561ad7372,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139089909492415,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,pod-template-hash: 86d989889c,},Annotations:map[string]string{kuberne
tes.io/config.seen: 2024-12-02T11:31:29.114140663Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:90465e3b-c05f-4fff-a0f6-c6a8b7703e89,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139089310304757,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storag
e-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-02T11:31:28.880933004Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:65d00dd604b777559653c55a6466bb79d7d85b16d8ff30bb6fdbf659da3855f4,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:93d2e4da-4868-4b1e-9718-bcc404d49f31,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139087638795064,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d2e4da-4868-4b1e-9718-bcc404d49f31,},Ann
otations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-12-02T11:31:27.317731601Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&PodSandboxMeta
data{Name:amd-gpu-device-plugin-9x4xz,Uid:55df6bd8-36c5-4864-8918-ac9425f2f9cb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139087295738499,Labels:map[string]string{controller-revision-hash: 59cf7d9b45,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:26.978148659Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-sh425,Uid:749fc6c5-7fb8-4660-876f-15b8c46c2e50,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139085427682825,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:24.221453920Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&PodSandboxMetadata{Name:kube-proxy-8bqbx,Uid:f637fa3b-3c50-489d-b864-5477922486f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139084010336031,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:31:23.103682635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94204ef648dac42b0379640042a7c974af9203d300ed
da9454e6243defccdd64,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-093588,Uid:fb05324ef0da57c6be9879c98c60ce72,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139073318258743,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fb05324ef0da57c6be9879c98c60ce72,kubernetes.io/config.seen: 2024-12-02T11:31:12.647806322Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&PodSandboxMetadata{Name:etcd-addons-093588,Uid:c463271d0012074285091ad6a9bb5269,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139073315298857,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.203:2379,kubernetes.io/config.hash: c463271d0012074285091ad6a9bb5269,kubernetes.io/config.seen: 2024-12-02T11:31:12.647808573Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-093588,Uid:5a54bf73c0b779fcefc9f9ad61889351,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139073311589878,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserve
r.advertise-address.endpoint: 192.168.39.203:8443,kubernetes.io/config.hash: 5a54bf73c0b779fcefc9f9ad61889351,kubernetes.io/config.seen: 2024-12-02T11:31:12.647798299Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-093588,Uid:2bc34c7aba0bd63feec10df99ed16d0b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139073309670964,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2bc34c7aba0bd63feec10df99ed16d0b,kubernetes.io/config.seen: 2024-12-02T11:31:12.647807592Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0f7c457e-7579-4a90-8d35-94e904
b03c15 name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.564019671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ffe7253-5388-4a4d-995c-614e76e96641 name=/runtime.v1.RuntimeService/ListContainers
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.564091697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ffe7253-5388-4a4d-995c-614e76e96641 name=/runtime.v1.RuntimeService/ListContainers
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.565341730Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733139376384172980,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ac1b9f95162ba75981741aaa49f12158cf043a8fdf9d4744bcf8968c12e5c9,PodSandboxId:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733139238782351328,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e1825b9c515e3ce7597d470b44e8214bc28c9ebaec69cfa21450036896bbd,PodSandboxId:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733139172191173333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-8
3f9-2119471a0df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3decc9cb607d18fa1d54ce547fba6341d15db31ca406cbd9e3b67c7274100e4,PodSandboxId:4dd378cbb1fe84c8a415b23a3fa25fd73a272f3a269862b2ce85b9144c6d0c04,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733139164704900997,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-jl9qn,io.kubernetes.pod.namespace: ingress-nginx,i
o.kubernetes.pod.uid: 4a36ffd2-b76d-4ad2-bf9a-cbd21cdc413d,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1c9894577d1715169976fac54e5e92fc068f4f1d8e28d8e59c638e2c000387fa,PodSandboxId:8692758ceeb9f604124de345e5be36a361c70a6a1e43061b1528f416cab23b16,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526f
f8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733139147156540134,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s2pxw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4ae830b-5959-4890-ba55-97c4e9066abc,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a0c87a19737894551d5457b50e806a71136166c7c35634861f55dca03207a3,PodSandboxId:5bb9c45ae596373f4daceb762f50465fa5db581af1f3941f89861fac201463ef,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cb
fbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733139147066341976,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7l67n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db5f3353-9ef7-4541-841d-b6d35db7f932,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3350f86d51adaa1294f38432a96d12f58dad3c88cb1b63f53d129a72f079c5a3,PodSandboxId:4f8cd12020a861322b02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-
server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733139133148376734,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd688bac9204be3d8b36fc16aa1eee1297e33d7bd568e04857088c350e23ddd2,PodSandboxId:727a2ad10b461920698fe35b169776c
ffd8807d863618b4787992c500f52f387,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733139125687482505,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ee4c3f1d373dc3b1a44810905158446a9b776b
9f7557b488e4222707c7dafb,PodSandboxId:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733139123415093119,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
efc6f97dd502796421c7ace089a6a9f104b7940b859b4ddfda4f5c8b56f5da02,PodSandboxId:65d00dd604b777559653c55a6466bb79d7d85b16d8ff30bb6fdbf659da3855f4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733139101270566536,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d2e4da-4868-4b1e-9718-bcc404d49f31,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328,PodSandboxId:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733139092690590220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6,PodSandboxId:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733139088826417039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7,PodSandboxId:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139084116954134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630,PodSandboxId:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139073515107013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,},Annotations:map[string]string{io.kubernetes.container.ha
sh: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55,PodSandboxId:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139073495156905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c,PodSandboxId:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139073507420703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96,PodSandboxId:94204ef648dac42b0379640042a7c974af9203d300edda9454e6243defccdd64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139073500988739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ffe7253-5388-4a4d-995c-614e76e96641 name=/runtime.v1.RuntimeService/ListContainers
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.566581722Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},},}" file="otel-collector/interceptors.go:62" id=335f567d-04f6-41e9-ad27-fce7729ca926 name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.566835427Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-bq9jt,Uid:6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139375394317316,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:36:15.077147569Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=335f567d-04f6-41e9-ad27-fce7729ca926 name=/runtime.v1.RuntimeService/ListPodSandbox
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.567235277Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Verbose:false,}" file="otel-collector/interceptors.go:62" id=69571f5b-a998-4682-8ffb-280932377acc name=/runtime.v1.RuntimeService/PodSandboxStatus
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.567342973Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-bq9jt,Uid:6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733139375394317316,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-02T11:36:15.077147569Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=69571f5b-a998-4682-8ffb-280932377acc name=/runtime.v1.RuntimeService/PodSandboxStatus
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.567774926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},},}" file="otel-collector/interceptors.go:62" id=22865bb0-5039-44a3-bc98-590e7bb96d48 name=/runtime.v1.RuntimeService/ListContainers
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.567896290Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22865bb0-5039-44a3-bc98-590e7bb96d48 name=/runtime.v1.RuntimeService/ListContainers
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.567954142Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733139376384172980,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22865bb0-5039-44a3-bc98-590e7bb96d48 name=/runtime.v1.RuntimeService/ListContainers
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.568321662Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7a911f85-bdec-4f21-8ac0-155e679d1454 name=/runtime.v1.RuntimeService/ContainerStatus
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.568434373Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1733139376454965403,StartedAt:1733139376496518098,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/kicbase/echo-server:1.0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restar
tCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49/containers/hello-world-app/f866a940,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49/volumes/kubernetes.io~projected/kube-api-access-5sw9v,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/
var/log/pods/default_hello-world-app-55bf9c44b4-bq9jt_6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49/hello-world-app/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7a911f85-bdec-4f21-8ac0-155e679d1454 name=/runtime.v1.RuntimeService/ContainerStatus
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.617913861Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47fc6f50-92d0-47dc-b333-7493e17ffc76 name=/runtime.v1.RuntimeService/Version
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.617999526Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47fc6f50-92d0-47dc-b333-7493e17ffc76 name=/runtime.v1.RuntimeService/Version
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.620857813Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e37ce77b-bb6b-4dcf-82e5-ac930c1a0c6b name=/runtime.v1.ImageService/ImageFsInfo
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.622085992Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733139376622059403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:603364,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e37ce77b-bb6b-4dcf-82e5-ac930c1a0c6b name=/runtime.v1.ImageService/ImageFsInfo
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.623380090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2ee9904-4e97-4d1c-902f-0cf42f9bddfb name=/runtime.v1.RuntimeService/ListContainers
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.623447426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2ee9904-4e97-4d1c-902f-0cf42f9bddfb name=/runtime.v1.RuntimeService/ListContainers
Dec 02 11:36:16 addons-093588 crio[664]: time="2024-12-02 11:36:16.624293948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b181ee3e2573f30c9d195729845653924d6c8b1d97de2d4398dfcbd7c14635,PodSandboxId:06d534d8ecc02eb081e6ebb75d130ed2acab22e2c5797be091916373e50dfaf8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733139376384172980,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-bq9jt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27ac1b9f95162ba75981741aaa49f12158cf043a8fdf9d4744bcf8968c12e5c9,PodSandboxId:a4e1abefd1098d3205efa6945693cef50eee51966731dca093f03d8fe9c39aad,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733139238782351328,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9cf016d6-ed93-4bb5-94f4-88b82ea95ba5,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c6e1825b9c515e3ce7597d470b44e8214bc28c9ebaec69cfa21450036896bbd,PodSandboxId:c5e32f031e4c7e6e33e4a64d6e67180f37f3952c403f53fc2d0c22fefd622fc7,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733139172191173333,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9f6e4744-0d79-497c-8
3f9-2119471a0df3,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3decc9cb607d18fa1d54ce547fba6341d15db31ca406cbd9e3b67c7274100e4,PodSandboxId:4dd378cbb1fe84c8a415b23a3fa25fd73a272f3a269862b2ce85b9144c6d0c04,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733139164704900997,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-jl9qn,io.kubernetes.pod.namespace: ingress-nginx,i
o.kubernetes.pod.uid: 4a36ffd2-b76d-4ad2-bf9a-cbd21cdc413d,},Annotations:map[string]string{io.kubernetes.container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1c9894577d1715169976fac54e5e92fc068f4f1d8e28d8e59c638e2c000387fa,PodSandboxId:8692758ceeb9f604124de345e5be36a361c70a6a1e43061b1528f416cab23b16,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526f
f8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733139147156540134,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-s2pxw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c4ae830b-5959-4890-ba55-97c4e9066abc,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29a0c87a19737894551d5457b50e806a71136166c7c35634861f55dca03207a3,PodSandboxId:5bb9c45ae596373f4daceb762f50465fa5db581af1f3941f89861fac201463ef,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cb
fbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733139147066341976,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-7l67n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: db5f3353-9ef7-4541-841d-b6d35db7f932,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3350f86d51adaa1294f38432a96d12f58dad3c88cb1b63f53d129a72f079c5a3,PodSandboxId:4f8cd12020a861322b02c0db26918f2917d69143cbb9270b2420ea69eccbd0f2,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-
server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733139133148376734,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-z5r8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4ffaa02-f311-4afa-9113-ac7a8b7b5828,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd688bac9204be3d8b36fc16aa1eee1297e33d7bd568e04857088c350e23ddd2,PodSandboxId:727a2ad10b461920698fe35b169776c
ffd8807d863618b4787992c500f52f387,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733139125687482505,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-6bbl8,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: c2094412-6704-4c4f-8bc7-c21561ad7372,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ee4c3f1d373dc3b1a44810905158446a9b776b
9f7557b488e4222707c7dafb,PodSandboxId:20efed53273cad9efdca3c9442f52945a7aabdbe33f73a910bd221e7aa893698,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733139123415093119,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-9x4xz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55df6bd8-36c5-4864-8918-ac9425f2f9cb,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
efc6f97dd502796421c7ace089a6a9f104b7940b859b4ddfda4f5c8b56f5da02,PodSandboxId:65d00dd604b777559653c55a6466bb79d7d85b16d8ff30bb6fdbf659da3855f4,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733139101270566536,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93d2e4da-4868-4b1e-9718-bcc404d49f31,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328,PodSandboxId:dadb7aad77d41b0ed6a6601b7a9b74f84cb5ae6718c6203d8a5c625a2be02f35,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733139092690590220,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90465e3b-c05f-4fff-a0f6-c6a8b7703e89,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminati
on-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6,PodSandboxId:1140032f7ee0abaae0c2672c5ace62975828cb2dcd9301c81219f0212d577ae7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733139088826417039,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sh425,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 749fc6c5-7fb8-4660-876f-15b8c46c2e50,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":
53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7,PodSandboxId:db3aa60a35b6c28dac42bfbc19ee0baa0cbdaadc7a8b33c39045fd1bac3cc2ad,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733139084116954134,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bqbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f637fa3b-3c50-489d-b864-5477922486f8,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630,PodSandboxId:e4ff56ebcc0a5ebcbac1ee968ee8dc78ee68cf95153fd592260d179da6cff776,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733139073515107013,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c463271d0012074285091ad6a9bb5269,},Annotations:map[string]string{io.kubernetes.container.ha
sh: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55,PodSandboxId:7ecb4d3d09f040cde16ecce99cfad956056c0e5f19f4b0e7576a2c73f434bd7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733139073495156905,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bc34c7aba0bd63feec10df99ed16d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c,PodSandboxId:e2d72d2c0f73b8d7a3f234acc53e9b311321c709dd07383e47a37bbe344a59bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733139073507420703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a54bf73c0b779fcefc9f9ad61889351,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96,PodSandboxId:94204ef648dac42b0379640042a7c974af9203d300edda9454e6243defccdd64,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733139073500988739,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-093588,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb05324ef0da57c6be9879c98c60ce72,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2ee9904-4e97-4d1c-902f-0cf42f9bddfb name=/runtime.v1.RuntimeService/ListContainers
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
84b181ee3e257 docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 Less than a second ago Running hello-world-app 0 06d534d8ecc02 hello-world-app-55bf9c44b4-bq9jt
27ac1b9f95162 docker.io/library/nginx@sha256:5acf10cd305853dc2271e3c818d342f3aeb3688b1256ab8f035fda04b91ed303 2 minutes ago Running nginx 0 a4e1abefd1098 nginx
5c6e1825b9c51 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 c5e32f031e4c7 busybox
a3decc9cb607d registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b 3 minutes ago Running controller 0 4dd378cbb1fe8 ingress-nginx-controller-5f85ff4588-jl9qn
1c9894577d171 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f 3 minutes ago Exited patch 0 8692758ceeb9f ingress-nginx-admission-patch-s2pxw
29a0c87a19737 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f 3 minutes ago Exited create 0 5bb9c45ae5963 ingress-nginx-admission-create-7l67n
3350f86d51ada registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a 4 minutes ago Running metrics-server 0 4f8cd12020a86 metrics-server-84c5f94fbc-z5r8x
bd688bac9204b docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef 4 minutes ago Running local-path-provisioner 0 727a2ad10b461 local-path-provisioner-86d989889c-6bbl8
53ee4c3f1d373 docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 20efed53273ca amd-gpu-device-plugin-9x4xz
efc6f97dd5027 gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab 4 minutes ago Running minikube-ingress-dns 0 65d00dd604b77 kube-ingress-dns-minikube
777ee197b7d2c 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 dadb7aad77d41 storage-provisioner
2415b4c333fed c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 4 minutes ago Running coredns 0 1140032f7ee0a coredns-7c65d6cfc9-sh425
28fe66023dde9 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38 4 minutes ago Running kube-proxy 0 db3aa60a35b6c kube-proxy-8bqbx
5256bb6e86f1e 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4 5 minutes ago Running etcd 0 e4ff56ebcc0a5 etcd-addons-093588
3e083dadde5b1 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173 5 minutes ago Running kube-apiserver 0 e2d72d2c0f73b kube-apiserver-addons-093588
6bf4cf0d44bb8 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503 5 minutes ago Running kube-controller-manager 0 94204ef648dac kube-controller-manager-addons-093588
9c587d7cc1d10 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856 5 minutes ago Running kube-scheduler 0 7ecb4d3d09f04 kube-scheduler-addons-093588
==> coredns [2415b4c333fedc635f009550e81ffc647cb6138f2e8e22058310b19c273854e6] <==
[INFO] 10.244.0.23:53682 - 5608 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00013055s
[INFO] 10.244.0.23:41205 - 33515 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131468s
[INFO] 10.244.0.23:49880 - 4478 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119977s
[INFO] 10.244.0.23:36694 - 64123 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000193882s
[INFO] 10.244.0.23:42791 - 63566 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000172378s
[INFO] 10.244.0.23:55478 - 16821 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001265351s
[INFO] 10.244.0.23:49121 - 51480 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001441104s
[INFO] 10.244.0.29:47894 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000412828s
[INFO] 10.244.0.29:53582 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130294s
[INFO] 10.244.0.7:37205 - 20278 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.00048156s
[INFO] 10.244.0.7:37205 - 18843 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000824224s
[INFO] 10.244.0.7:37205 - 37572 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000106395s
[INFO] 10.244.0.7:37205 - 42345 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000222869s
[INFO] 10.244.0.7:37205 - 6623 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000146608s
[INFO] 10.244.0.7:37205 - 27288 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000093673s
[INFO] 10.244.0.7:37205 - 47481 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000157232s
[INFO] 10.244.0.7:37205 - 65471 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000168623s
[INFO] 10.244.0.7:38019 - 48214 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000090451s
[INFO] 10.244.0.7:38019 - 47930 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000109788s
[INFO] 10.244.0.7:42122 - 32670 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000107923s
[INFO] 10.244.0.7:42122 - 32447 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000147476s
[INFO] 10.244.0.7:57067 - 27166 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000155662s
[INFO] 10.244.0.7:57067 - 26976 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092139s
[INFO] 10.244.0.7:49262 - 29665 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000051809s
[INFO] 10.244.0.7:49262 - 29855 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00007448s
==> describe nodes <==
Name: addons-093588
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-093588
kubernetes.io/os=linux
minikube.k8s.io/commit=128491876095763f75c6c62c8e8cebf09ad32ac8
minikube.k8s.io/name=addons-093588
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_12_02T11_31_19_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-093588
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 02 Dec 2024 11:31:16 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-093588
AcquireTime: <unset>
RenewTime: Mon, 02 Dec 2024 11:36:15 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 02 Dec 2024 11:34:22 +0000 Mon, 02 Dec 2024 11:31:14 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 02 Dec 2024 11:34:22 +0000 Mon, 02 Dec 2024 11:31:14 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 02 Dec 2024 11:34:22 +0000 Mon, 02 Dec 2024 11:31:14 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 02 Dec 2024 11:34:22 +0000 Mon, 02 Dec 2024 11:31:19 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.203
Hostname: addons-093588
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 3912788Ki
pods: 110
System Info:
Machine ID: 0b981ec46e284c639f1b7adc8d382e1a
System UUID: 0b981ec4-6e28-4c63-9f1b-7adc8d382e1a
Boot ID: df4ffb50-8889-4ff6-ab14-5cfc93566331
Kernel Version: 5.10.207
OS Image: Buildroot 2023.02.9
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.31.2
Kube-Proxy Version: v1.31.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (15 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m27s
default hello-world-app-55bf9c44b4-bq9jt 0 (0%) 0 (0%) 0 (0%) 0 (0%) 1s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m21s
ingress-nginx ingress-nginx-controller-5f85ff4588-jl9qn 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m43s
kube-system amd-gpu-device-plugin-9x4xz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m50s
kube-system coredns-7c65d6cfc9-sh425 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m52s
kube-system etcd-addons-093588 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m57s
kube-system kube-apiserver-addons-093588 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m57s
kube-system kube-controller-manager-addons-093588 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m57s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m49s
kube-system kube-proxy-8bqbx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m53s
kube-system kube-scheduler-addons-093588 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m57s
kube-system metrics-server-84c5f94fbc-z5r8x 100m (5%) 0 (0%) 200Mi (5%) 0 (0%) 4m46s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m48s
local-path-storage local-path-provisioner-86d989889c-6bbl8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m47s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 0 (0%)
memory 460Mi (12%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m52s kube-proxy
Normal NodeHasSufficientMemory 5m4s (x8 over 5m4s) kubelet Node addons-093588 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m4s (x8 over 5m4s) kubelet Node addons-093588 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m4s (x7 over 5m4s) kubelet Node addons-093588 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m4s kubelet Updated Node Allocatable limit across pods
Normal Starting 4m58s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 4m58s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 4m57s kubelet Node addons-093588 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m57s kubelet Node addons-093588 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m57s kubelet Node addons-093588 status is now: NodeHasSufficientPID
Normal NodeReady 4m57s kubelet Node addons-093588 status is now: NodeReady
Normal RegisteredNode 4m53s node-controller Node addons-093588 event: Registered Node addons-093588 in Controller
==> dmesg <==
[ +6.481301] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
[ +0.076788] kauditd_printk_skb: 69 callbacks suppressed
[ +5.295485] kauditd_printk_skb: 21 callbacks suppressed
[ +0.458763] systemd-fstab-generator[1464]: Ignoring "noauto" option for root device
[ +4.667325] kauditd_printk_skb: 106 callbacks suppressed
[ +5.157468] kauditd_printk_skb: 127 callbacks suppressed
[ +6.980309] kauditd_printk_skb: 100 callbacks suppressed
[Dec 2 11:32] kauditd_printk_skb: 2 callbacks suppressed
[ +8.999753] kauditd_printk_skb: 12 callbacks suppressed
[ +5.944183] kauditd_printk_skb: 29 callbacks suppressed
[ +6.256346] kauditd_printk_skb: 17 callbacks suppressed
[ +5.021267] kauditd_printk_skb: 18 callbacks suppressed
[ +6.144991] kauditd_printk_skb: 49 callbacks suppressed
[ +6.049857] kauditd_printk_skb: 8 callbacks suppressed
[ +6.986101] kauditd_printk_skb: 11 callbacks suppressed
[ +10.155001] kauditd_printk_skb: 14 callbacks suppressed
[Dec 2 11:33] kauditd_printk_skb: 1 callbacks suppressed
[ +21.174772] kauditd_printk_skb: 25 callbacks suppressed
[ +5.028933] kauditd_printk_skb: 32 callbacks suppressed
[ +6.099758] kauditd_printk_skb: 62 callbacks suppressed
[ +15.240942] kauditd_printk_skb: 19 callbacks suppressed
[ +5.362197] kauditd_printk_skb: 9 callbacks suppressed
[Dec 2 11:34] kauditd_printk_skb: 32 callbacks suppressed
[ +6.790044] kauditd_printk_skb: 49 callbacks suppressed
[Dec 2 11:36] kauditd_printk_skb: 11 callbacks suppressed
==> etcd [5256bb6e86f1eaabc17c26882fc6f3963eb74f7b9150d179a9f02186f9066630] <==
{"level":"info","ts":"2024-12-02T11:32:31.509031Z","caller":"traceutil/trace.go:171","msg":"trace[1734125650] transaction","detail":"{read_only:false; response_revision:1081; number_of_response:1; }","duration":"413.800042ms","start":"2024-12-02T11:32:31.095224Z","end":"2024-12-02T11:32:31.509024Z","steps":["trace[1734125650] 'process raft request' (duration: 412.556295ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-02T11:32:31.509318Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:32:31.095216Z","time spent":"413.887629ms","remote":"127.0.0.1:54862","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3133,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" mod_revision:837 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-create\" > >"}
{"level":"info","ts":"2024-12-02T11:32:31.509910Z","caller":"traceutil/trace.go:171","msg":"trace[1769726754] linearizableReadLoop","detail":"{readStateIndex:1110; appliedIndex:1108; }","duration":"317.515372ms","start":"2024-12-02T11:32:31.190760Z","end":"2024-12-02T11:32:31.508275Z","steps":["trace[1769726754] 'read index received' (duration: 316.176707ms)","trace[1769726754] 'applied index is now lower than readState.Index' (duration: 1.338186ms)"],"step_count":2}
{"level":"warn","ts":"2024-12-02T11:32:31.510039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"319.348778ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-12-02T11:32:31.510230Z","caller":"traceutil/trace.go:171","msg":"trace[1905010634] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1081; }","duration":"319.54177ms","start":"2024-12-02T11:32:31.190681Z","end":"2024-12-02T11:32:31.510223Z","steps":["trace[1905010634] 'agreement among raft nodes before linearized reading' (duration: 319.33211ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-02T11:32:31.510347Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:32:31.190637Z","time spent":"319.701654ms","remote":"127.0.0.1:54810","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"warn","ts":"2024-12-02T11:32:31.510884Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.331323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/yakd-dashboard/yakd-dashboard-67d98fc6b-hsrwm\" ","response":"range_response_count:1 size:4581"}
{"level":"info","ts":"2024-12-02T11:32:31.510999Z","caller":"traceutil/trace.go:171","msg":"trace[1506151935] range","detail":"{range_begin:/registry/pods/yakd-dashboard/yakd-dashboard-67d98fc6b-hsrwm; range_end:; response_count:1; response_revision:1081; }","duration":"290.452941ms","start":"2024-12-02T11:32:31.220538Z","end":"2024-12-02T11:32:31.510991Z","steps":["trace[1506151935] 'agreement among raft nodes before linearized reading' (duration: 290.098187ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-02T11:32:31.511638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.271954ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-12-02T11:32:31.511777Z","caller":"traceutil/trace.go:171","msg":"trace[41204636] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1081; }","duration":"223.410853ms","start":"2024-12-02T11:32:31.288359Z","end":"2024-12-02T11:32:31.511769Z","steps":["trace[41204636] 'agreement among raft nodes before linearized reading' (duration: 223.263454ms)"],"step_count":1}
{"level":"info","ts":"2024-12-02T11:32:40.977312Z","caller":"traceutil/trace.go:171","msg":"trace[1810761727] transaction","detail":"{read_only:false; response_revision:1123; number_of_response:1; }","duration":"124.053614ms","start":"2024-12-02T11:32:40.853243Z","end":"2024-12-02T11:32:40.977297Z","steps":["trace[1810761727] 'process raft request' (duration: 123.593438ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-02T11:32:43.349291Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.790942ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3888176061930468756 > lease_revoke:<id:35f59387238af47d>","response":"size:27"}
{"level":"info","ts":"2024-12-02T11:32:43.349565Z","caller":"traceutil/trace.go:171","msg":"trace[978203883] linearizableReadLoop","detail":"{readStateIndex:1159; appliedIndex:1158; }","duration":"326.46397ms","start":"2024-12-02T11:32:43.023087Z","end":"2024-12-02T11:32:43.349551Z","steps":["trace[978203883] 'read index received' (duration: 176.281029ms)","trace[978203883] 'applied index is now lower than readState.Index' (duration: 150.108422ms)"],"step_count":2}
{"level":"warn","ts":"2024-12-02T11:32:43.349677Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.516213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2024-12-02T11:32:43.349811Z","caller":"traceutil/trace.go:171","msg":"trace[1556617837] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1127; }","duration":"326.718412ms","start":"2024-12-02T11:32:43.023082Z","end":"2024-12-02T11:32:43.349800Z","steps":["trace[1556617837] 'agreement among raft nodes before linearized reading' (duration: 326.493012ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-02T11:32:43.349891Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:32:43.023040Z","time spent":"326.836999ms","remote":"127.0.0.1:54810","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
{"level":"info","ts":"2024-12-02T11:33:25.193945Z","caller":"traceutil/trace.go:171","msg":"trace[1208210779] linearizableReadLoop","detail":"{readStateIndex:1400; appliedIndex:1399; }","duration":"208.352199ms","start":"2024-12-02T11:33:24.985579Z","end":"2024-12-02T11:33:25.193931Z","steps":["trace[1208210779] 'read index received' (duration: 208.251949ms)","trace[1208210779] 'applied index is now lower than readState.Index' (duration: 99.656µs)"],"step_count":2}
{"level":"info","ts":"2024-12-02T11:33:25.194034Z","caller":"traceutil/trace.go:171","msg":"trace[585938922] transaction","detail":"{read_only:false; response_revision:1356; number_of_response:1; }","duration":"373.401906ms","start":"2024-12-02T11:33:24.820626Z","end":"2024-12-02T11:33:25.194028Z","steps":["trace[585938922] 'process raft request' (duration: 373.198624ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-02T11:33:25.194112Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:33:24.820609Z","time spent":"373.443109ms","remote":"127.0.0.1:54810","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4248,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1\" mod_revision:1354 > success:<request_put:<key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1\" value_size:4148 >> failure:<request_range:<key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1\" > >"}
{"level":"warn","ts":"2024-12-02T11:33:25.194354Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.766538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/registry-66c9cd494c-4dmpv.180d58d19a377856\" ","response":"range_response_count:1 size:826"}
{"level":"info","ts":"2024-12-02T11:33:25.194403Z","caller":"traceutil/trace.go:171","msg":"trace[1712045514] range","detail":"{range_begin:/registry/events/kube-system/registry-66c9cd494c-4dmpv.180d58d19a377856; range_end:; response_count:1; response_revision:1356; }","duration":"208.820614ms","start":"2024-12-02T11:33:24.985574Z","end":"2024-12-02T11:33:25.194395Z","steps":["trace[1712045514] 'agreement among raft nodes before linearized reading' (duration: 208.618556ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-02T11:33:25.194540Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.491693ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1\" ","response":"range_response_count:1 size:4263"}
{"level":"info","ts":"2024-12-02T11:33:25.194787Z","caller":"traceutil/trace.go:171","msg":"trace[1843945180] range","detail":"{range_begin:/registry/pods/local-path-storage/helper-pod-create-pvc-74013b2b-13f5-4c56-bebc-ca88a0c9e4c1; range_end:; response_count:1; response_revision:1356; }","duration":"187.741861ms","start":"2024-12-02T11:33:25.007033Z","end":"2024-12-02T11:33:25.194775Z","steps":["trace[1843945180] 'agreement among raft nodes before linearized reading' (duration: 187.326707ms)"],"step_count":1}
{"level":"info","ts":"2024-12-02T11:33:46.232089Z","caller":"traceutil/trace.go:171","msg":"trace[564767909] transaction","detail":"{read_only:false; response_revision:1529; number_of_response:1; }","duration":"323.809998ms","start":"2024-12-02T11:33:45.908266Z","end":"2024-12-02T11:33:46.232076Z","steps":["trace[564767909] 'process raft request' (duration: 323.483126ms)"],"step_count":1}
{"level":"warn","ts":"2024-12-02T11:33:46.232222Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-02T11:33:45.908251Z","time spent":"323.913932ms","remote":"127.0.0.1:54798","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1524 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
==> kernel <==
11:36:16 up 5 min, 0 users, load average: 0.31, 1.00, 0.55
Linux addons-093588 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2023.02.9"
==> kube-apiserver [3e083dadde5b123c44d41a89d29ae5e3b62ad8b1353811941cba2214a716328c] <==
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
> logger="UnhandledError"
E1202 11:33:18.047549 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.92.16:443: connect: connection refused" logger="UnhandledError"
E1202 11:33:18.054141 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.92.16:443: connect: connection refused" logger="UnhandledError"
E1202 11:33:18.078468 1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.92.16:443: connect: connection refused" logger="UnhandledError"
I1202 11:33:18.164928 1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
I1202 11:33:20.557130 1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.110.149"}
I1202 11:33:50.188346 1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
W1202 11:33:51.225296 1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
I1202 11:33:53.732475 1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1202 11:33:55.653397 1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
I1202 11:33:55.870562 1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.159.13"}
I1202 11:34:08.733165 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1202 11:34:08.733224 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1202 11:34:08.777393 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1202 11:34:08.778616 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1202 11:34:08.789293 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1202 11:34:08.789434 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1202 11:34:08.791466 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1202 11:34:08.791539 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1202 11:34:08.934431 1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
W1202 11:34:09.792203 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1202 11:34:09.937879 1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1202 11:34:09.937904 1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1202 11:36:15.262680 1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.26.244"}
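The repeated connection-refused errors against https://10.107.92.16:443/apis/metrics.k8s.io/v1beta1 mean the aggregated metrics API had no backend answering at that moment, which is almost certainly unrelated to the ingress failure itself. A quick way to confirm the API's state from the same context, sketched under the assumption that the backing deployment carries the usual k8s-app=metrics-server label:

kubectl --context addons-093588 get apiservice v1beta1.metrics.k8s.io
kubectl --context addons-093588 -n kube-system get pods -l k8s-app=metrics-server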
==> kube-controller-manager [6bf4cf0d44bb80164410a59ec2d63ddecec0cd22ba61e826143ac7e5048dfe96] <==
E1202 11:34:42.751767 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1202 11:34:46.163018 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 11:34:46.163054 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1202 11:34:55.790553 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 11:34:55.790616 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1202 11:35:16.266561 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 11:35:16.266596 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1202 11:35:17.101236 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 11:35:17.101354 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1202 11:35:18.787290 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 11:35:18.787391 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1202 11:35:39.633090 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 11:35:39.633243 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1202 11:35:47.738683 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 11:35:47.738802 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1202 11:35:58.590855 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 11:35:58.590928 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
W1202 11:35:58.607597 1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1202 11:35:58.607631 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
I1202 11:36:15.082221 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.008333ms"
I1202 11:36:15.092214 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.839443ms"
I1202 11:36:15.092511 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="82.32µs"
I1202 11:36:15.106747 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="74.48µs"
I1202 11:36:16.591405 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.378277ms"
I1202 11:36:16.592234 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.676µs"
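The recurring "failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" entries come from the metadata informer retrying resources whose CustomResourceDefinitions were removed mid-run; the kube-apiserver log above shows the matching "Terminating all watchers" events for the gadget.kinvolk.io and snapshot.storage.k8s.io types. A one-line check, sketched on the assumption that kubectl still points at the same profile:

kubectl --context addons-093588 get crd | grep -E 'snapshot\.storage\.k8s\.io|gadget\.kinvolk\.io' || echo 'CRDs already removed'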
==> kube-proxy [28fe66023dde95d8f7e8873c7f0090dfa6587f0a0b99c6ef565e9d91cc3ba4d7] <==
add table ip kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^
>
E1202 11:31:24.341304 1 proxier.go:734] "Error cleaning up nftables rules" err=<
could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
add table ip6 kube-proxy
^^^^^^^^^^^^^^^^^^^^^^^^^
>
I1202 11:31:24.349628 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.203"]
E1202 11:31:24.349847 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1202 11:31:24.516756 1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
I1202 11:31:24.516794 1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1202 11:31:24.516840 1 server_linux.go:169] "Using iptables Proxier"
I1202 11:31:24.521066 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1202 11:31:24.521669 1 server.go:483] "Version info" version="v1.31.2"
I1202 11:31:24.521810 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1202 11:31:24.544631 1 config.go:199] "Starting service config controller"
I1202 11:31:24.544656 1 shared_informer.go:313] Waiting for caches to sync for service config
I1202 11:31:24.544760 1 config.go:105] "Starting endpoint slice config controller"
I1202 11:31:24.544767 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I1202 11:31:24.546593 1 config.go:328] "Starting node config controller"
I1202 11:31:24.555031 1 shared_informer.go:313] Waiting for caches to sync for node config
I1202 11:31:24.644874 1 shared_informer.go:320] Caches are synced for endpoint slice config
I1202 11:31:24.644939 1 shared_informer.go:320] Caches are synced for service config
I1202 11:31:24.659340 1 shared_informer.go:320] Caches are synced for node config
==> kube-scheduler [9c587d7cc1d105cfeab88badd7f6ae51fe0893d36407a7daa5a20e1edb9f3b55] <==
W1202 11:31:16.359632 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1202 11:31:16.359683 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1202 11:31:16.359821 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1202 11:31:16.359854 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1202 11:31:16.359953 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1202 11:31:16.359994 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1202 11:31:17.170036 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1202 11:31:17.170071 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1202 11:31:17.175313 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1202 11:31:17.175385 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1202 11:31:17.343360 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1202 11:31:17.343518 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W1202 11:31:17.423570 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1202 11:31:17.424077 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1202 11:31:17.477506 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1202 11:31:17.477537 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W1202 11:31:17.558906 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1202 11:31:17.559078 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1202 11:31:17.571603 1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1202 11:31:17.571679 1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W1202 11:31:17.575523 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1202 11:31:17.576010 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W1202 11:31:17.576175 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1202 11:31:17.576213 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
I1202 11:31:19.948258 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078398 1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="liveness-probe"
Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078481 1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d2473044-c394-4b78-8583-763661c9c329" containerName="registry-proxy"
Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078515 1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1b9feacd-f2e4-41f7-abc9-06e472d66f0b" containerName="volume-snapshot-controller"
Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078547 1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea0e750d-7300-4238-9443-627b04eb650d" containerName="volume-snapshot-controller"
Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078578 1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eacac2d8-005d-4f85-aa5f-5ee6725473a4" containerName="csi-resizer"
Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078624 1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="node-driver-registrar"
Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078664 1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="csi-external-health-monitor-controller"
Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078754 1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="179b2fd0-56c9-4e0e-8288-e66d73594712" containerName="task-pv-container"
Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078787 1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="csi-provisioner"
Dec 02 11:36:15 addons-093588 kubelet[1213]: E1202 11:36:15.078825 1213 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ba754ca-3bc4-4639-bbf2-9d771c422d1f" containerName="registry"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.078923 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="csi-external-health-monitor-controller"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.078956 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="csi-provisioner"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.078988 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea0e750d-7300-4238-9443-627b04eb650d" containerName="volume-snapshot-controller"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079024 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="hostpath"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079058 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2473044-c394-4b78-8583-763661c9c329" containerName="registry-proxy"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079097 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="eacac2d8-005d-4f85-aa5f-5ee6725473a4" containerName="csi-resizer"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079127 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="liveness-probe"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079157 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="9090d43f-db00-4d9f-a761-7e784e7d66e9" containerName="csi-attacher"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079186 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="179b2fd0-56c9-4e0e-8288-e66d73594712" containerName="task-pv-container"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079217 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9feacd-f2e4-41f7-abc9-06e472d66f0b" containerName="volume-snapshot-controller"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079246 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="node-driver-registrar"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079281 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="5558e993-a5eb-47db-b72e-028a2df87321" containerName="csi-snapshotter"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.079310 1213 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ba754ca-3bc4-4639-bbf2-9d771c422d1f" containerName="registry"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.111291 1213 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sw9v\" (UniqueName: \"kubernetes.io/projected/6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49-kube-api-access-5sw9v\") pod \"hello-world-app-55bf9c44b4-bq9jt\" (UID: \"6cd06bae-bc67-4ce6-9f1b-fa2d4ee11f49\") " pod="default/hello-world-app-55bf9c44b4-bq9jt"
Dec 02 11:36:15 addons-093588 kubelet[1213]: I1202 11:36:15.980246 1213 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-9x4xz" secret="" err="secret \"gcp-auth\" not found"
==> storage-provisioner [777ee197b7d2c034cf98316513d742f29c06eabfe4ae6b95718bbd9472d75328] <==
I1202 11:31:33.446653 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1202 11:31:33.475001 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1202 11:31:33.475232 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1202 11:31:33.571674 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1202 11:31:33.584238 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-093588_db6d9f13-b66b-4ee3-98aa-9e1906833c9b!
I1202 11:31:33.585297 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a34dd670-d034-4d97-b122-ad1727e6d2ec", APIVersion:"v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-093588_db6d9f13-b66b-4ee3-98aa-9e1906833c9b became leader
I1202 11:31:33.684810 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-093588_db6d9f13-b66b-4ee3-98aa-9e1906833c9b!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-093588 -n addons-093588
helpers_test.go:261: (dbg) Run: kubectl --context addons-093588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-7l67n ingress-nginx-admission-patch-s2pxw
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context addons-093588 describe pod ingress-nginx-admission-create-7l67n ingress-nginx-admission-patch-s2pxw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-093588 describe pod ingress-nginx-admission-create-7l67n ingress-nginx-admission-patch-s2pxw: exit status 1 (65.427052ms)
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-7l67n" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-s2pxw" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-093588 describe pod ingress-nginx-admission-create-7l67n ingress-nginx-admission-patch-s2pxw: exit status 1
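The NotFound errors are a namespace mismatch rather than missing pods: the all-namespaces field-selector query above did return the two admission pods, but the subsequent describe ran without a namespace and therefore looked only in default. A minimal sketch (not part of helpers_test.go) that resolves each pod's namespace before describing it; the jsonpath is an assumption, not taken from the test:

for p in ingress-nginx-admission-create-7l67n ingress-nginx-admission-patch-s2pxw; do
  # metadata.name is a supported field selector for pods; 2>/dev/null hides the
  # jsonpath index error when the pod has already been garbage-collected
  ns=$(kubectl --context addons-093588 get pods -A --field-selector=metadata.name="$p" \
         -o jsonpath='{.items[0].metadata.namespace}' 2>/dev/null)
  if [ -n "$ns" ]; then
    kubectl --context addons-093588 -n "$ns" describe pod "$p"
  else
    echo "pod $p no longer exists"
  fi
done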
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-093588 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run: out/minikube-linux-amd64 -p addons-093588 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-093588 addons disable ingress --alsologtostderr -v=1: (7.695479411s)
--- FAIL: TestAddons/parallel/Ingress (150.96s)
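To iterate on this one subtest without re-running the whole addons suite, go test's -run flag accepts slash-separated subtest patterns. A hedged sketch from a minikube checkout; the package path is assumed, and any extra flags the harness needs to locate the freshly built out/minikube-linux-amd64 are left to the suite's own docs:

# -run and -timeout are standard `go test` flags; adjust the integration package path as needed
go test ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 30m -v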