=== RUN TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run: kubectl --context addons-964416 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run: kubectl --context addons-964416 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run: kubectl --context addons-964416 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [16094845-c835-4494-a064-31053be1943b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [16094845-c835-4494-a064-31053be1943b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.010586526s
I1123 08:13:55.084923 18055 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run: out/minikube-linux-amd64 -p addons-964416 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-964416 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.65419167s)
** stderr **
ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
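    [editor note] Exit status 28 is curl's "operation timed out" code, so the ssh'd probe above never got an HTTP answer from the ingress controller on 127.0.0.1:80 inside the VM. A minimal sketch of reproducing the same probe by hand against this profile is below; the profile name, URL and Host header are taken from the log, while the -v flag, the 30-second --max-time and the bare "kubectl get ingress" listing are illustrative assumptions, not what the test itself runs.
    # sketch: reproduce the failing ingress probe manually (assumes kubectl and the test-built minikube binary are on PATH)
    kubectl --context addons-964416 -n ingress-nginx get pods -o wide     # controller pod should be Running
    kubectl --context addons-964416 get ingress                           # rule routing nginx.example.com should exist
    out/minikube-linux-amd64 -p addons-964416 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"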
addons_test.go:288: (dbg) Run: kubectl --context addons-964416 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run: out/minikube-linux-amd64 -p addons-964416 ip
addons_test.go:299: (dbg) Run: nslookup hello-john.test 192.168.39.198
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======> post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p addons-964416 -n addons-964416
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======> post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run: out/minikube-linux-amd64 -p addons-964416 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-964416 logs -n 25: (1.302038733s)
helpers_test.go:260: TestAddons/parallel/Ingress logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ delete │ -p download-only-334487 │ download-only-334487 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ 23 Nov 25 08:10 UTC │
│ start │ --download-only -p binary-mirror-588509 --alsologtostderr --binary-mirror http://127.0.0.1:36055 --driver=kvm2 --container-runtime=crio │ binary-mirror-588509 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ │
│ delete │ -p binary-mirror-588509 │ binary-mirror-588509 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ 23 Nov 25 08:10 UTC │
│ addons │ disable dashboard -p addons-964416 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ │
│ addons │ enable dashboard -p addons-964416 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ │
│ start │ -p addons-964416 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2 --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ 23 Nov 25 08:13 UTC │
│ addons │ addons-964416 addons disable volcano --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC │
│ addons │ addons-964416 addons disable gcp-auth --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC │
│ addons │ enable headlamp -p addons-964416 --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC │
│ addons │ addons-964416 addons disable metrics-server --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC │
│ addons │ addons-964416 addons disable nvidia-device-plugin --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC │
│ addons │ addons-964416 addons disable headlamp --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC │
│ ip │ addons-964416 ip │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC │
│ addons │ addons-964416 addons disable registry --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC │
│ addons │ addons-964416 addons disable yakd --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC │
│ ssh │ addons-964416 ssh cat /opt/local-path-provisioner/pvc-cd89c1fc-4685-472d-9496-2945ce215720_default_test-pvc/file1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:13 UTC │
│ addons │ addons-964416 addons disable storage-provisioner-rancher --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:14 UTC │
│ addons │ addons-964416 addons disable inspektor-gadget --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ 23 Nov 25 08:14 UTC │
│ ssh │ addons-964416 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:13 UTC │ │
│ addons │ addons-964416 addons disable cloud-spanner --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:14 UTC │ 23 Nov 25 08:14 UTC │
│ addons │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-964416 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:14 UTC │ 23 Nov 25 08:14 UTC │
│ addons │ addons-964416 addons disable registry-creds --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:14 UTC │ 23 Nov 25 08:14 UTC │
│ addons │ addons-964416 addons disable volumesnapshots --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:14 UTC │ 23 Nov 25 08:14 UTC │
│ addons │ addons-964416 addons disable csi-hostpath-driver --alsologtostderr -v=1 │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:14 UTC │ 23 Nov 25 08:14 UTC │
│ ip │ addons-964416 ip │ addons-964416 │ jenkins │ v1.37.0 │ 23 Nov 25 08:16 UTC │ 23 Nov 25 08:16 UTC │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/11/23 08:10:58
Running on machine: ubuntu-20-agent-4
Binary: Built with gc go1.25.3 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1123 08:10:58.036860 18653 out.go:360] Setting OutFile to fd 1 ...
I1123 08:10:58.036939 18653 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:10:58.036947 18653 out.go:374] Setting ErrFile to fd 2...
I1123 08:10:58.036951 18653 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:10:58.037107 18653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-14048/.minikube/bin
I1123 08:10:58.037615 18653 out.go:368] Setting JSON to false
I1123 08:10:58.038378 18653 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3207,"bootTime":1763882251,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1123 08:10:58.038440 18653 start.go:143] virtualization: kvm guest
I1123 08:10:58.040243 18653 out.go:179] * [addons-964416] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1123 08:10:58.041460 18653 out.go:179] - MINIKUBE_LOCATION=21969
I1123 08:10:58.041472 18653 notify.go:221] Checking for updates...
I1123 08:10:58.043934 18653 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1123 08:10:58.045121 18653 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/21969-14048/kubeconfig
I1123 08:10:58.046299 18653 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-14048/.minikube
I1123 08:10:58.047483 18653 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1123 08:10:58.048567 18653 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1123 08:10:58.049864 18653 driver.go:422] Setting default libvirt URI to qemu:///system
I1123 08:10:58.079490 18653 out.go:179] * Using the kvm2 driver based on user configuration
I1123 08:10:58.080607 18653 start.go:309] selected driver: kvm2
I1123 08:10:58.080617 18653 start.go:927] validating driver "kvm2" against <nil>
I1123 08:10:58.080627 18653 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1123 08:10:58.081236 18653 start_flags.go:327] no existing cluster config was found, will generate one from the flags
I1123 08:10:58.081452 18653 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1123 08:10:58.081498 18653 cni.go:84] Creating CNI manager for ""
I1123 08:10:58.081549 18653 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1123 08:10:58.081559 18653 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I1123 08:10:58.081610 18653 start.go:353] cluster config:
{Name:addons-964416 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-964416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1123 08:10:58.081720 18653 iso.go:125] acquiring lock: {Name:mk4b6da1d874cbf82d9df128fb5e9a0d9b7ea794 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1123 08:10:58.083019 18653 out.go:179] * Starting "addons-964416" primary control-plane node in "addons-964416" cluster
I1123 08:10:58.084211 18653 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1123 08:10:58.084234 18653 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-14048/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
I1123 08:10:58.084240 18653 cache.go:65] Caching tarball of preloaded images
I1123 08:10:58.084325 18653 preload.go:238] Found /home/jenkins/minikube-integration/21969-14048/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1123 08:10:58.084334 18653 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
I1123 08:10:58.084637 18653 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/config.json ...
I1123 08:10:58.084658 18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/config.json: {Name:mkf7d715d976f8cb8c0bc303642b8a0651fc1f32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:10:58.084776 18653 start.go:360] acquireMachinesLock for addons-964416: {Name:mk2573900f00f8e3cbe200607276d61a844e85b7 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1123 08:10:58.084821 18653 start.go:364] duration metric: took 33.261µs to acquireMachinesLock for "addons-964416"
I1123 08:10:58.084837 18653 start.go:93] Provisioning new machine with config: &{Name:addons-964416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-964416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1123 08:10:58.084878 18653 start.go:125] createHost starting for "" (driver="kvm2")
I1123 08:10:58.086737 18653 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
I1123 08:10:58.086876 18653 start.go:159] libmachine.API.Create for "addons-964416" (driver="kvm2")
I1123 08:10:58.086904 18653 client.go:173] LocalClient.Create starting
I1123 08:10:58.086971 18653 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem
I1123 08:10:58.286256 18653 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/cert.pem
I1123 08:10:58.372382 18653 main.go:143] libmachine: creating domain...
I1123 08:10:58.372406 18653 main.go:143] libmachine: creating network...
I1123 08:10:58.373672 18653 main.go:143] libmachine: found existing default network
I1123 08:10:58.373848 18653 main.go:143] libmachine: <network>
<name>default</name>
<uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr0' stp='on' delay='0'/>
<mac address='52:54:00:10:a2:1d'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
</ip>
</network>
I1123 08:10:58.374377 18653 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d9e4f0}
I1123 08:10:58.374483 18653 main.go:143] libmachine: defining private network:
<network>
<name>mk-addons-964416</name>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1123 08:10:58.380291 18653 main.go:143] libmachine: creating private network mk-addons-964416 192.168.39.0/24...
I1123 08:10:58.442042 18653 main.go:143] libmachine: private network mk-addons-964416 192.168.39.0/24 created
I1123 08:10:58.442335 18653 main.go:143] libmachine: <network>
<name>mk-addons-964416</name>
<uuid>71fd788f-ed2f-4bfe-aa4f-90ed1672fe6a</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:ec:79:b3'/>
<dns enable='no'/>
<ip address='192.168.39.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.39.2' end='192.168.39.253'/>
</dhcp>
</ip>
</network>
I1123 08:10:58.442364 18653 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416 ...
I1123 08:10:58.442384 18653 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21969-14048/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
I1123 08:10:58.442393 18653 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21969-14048/.minikube
I1123 08:10:58.442449 18653 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21969-14048/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21969-14048/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
I1123 08:10:58.693544 18653 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa...
I1123 08:10:58.761884 18653 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/addons-964416.rawdisk...
I1123 08:10:58.761930 18653 main.go:143] libmachine: Writing magic tar header
I1123 08:10:58.761953 18653 main.go:143] libmachine: Writing SSH key tar header
I1123 08:10:58.762029 18653 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416 ...
I1123 08:10:58.762095 18653 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416
I1123 08:10:58.762130 18653 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416 (perms=drwx------)
I1123 08:10:58.762147 18653 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21969-14048/.minikube/machines
I1123 08:10:58.762161 18653 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21969-14048/.minikube/machines (perms=drwxr-xr-x)
I1123 08:10:58.762174 18653 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21969-14048/.minikube
I1123 08:10:58.762187 18653 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21969-14048/.minikube (perms=drwxr-xr-x)
I1123 08:10:58.762201 18653 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21969-14048
I1123 08:10:58.762211 18653 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21969-14048 (perms=drwxrwxr-x)
I1123 08:10:58.762219 18653 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
I1123 08:10:58.762227 18653 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
I1123 08:10:58.762238 18653 main.go:143] libmachine: checking permissions on dir: /home/jenkins
I1123 08:10:58.762246 18653 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
I1123 08:10:58.762254 18653 main.go:143] libmachine: checking permissions on dir: /home
I1123 08:10:58.762262 18653 main.go:143] libmachine: skipping /home - not owner
I1123 08:10:58.762266 18653 main.go:143] libmachine: defining domain...
I1123 08:10:58.763412 18653 main.go:143] libmachine: defining domain using XML:
<domain type='kvm'>
<name>addons-964416</name>
<memory unit='MiB'>4096</memory>
<vcpu>2</vcpu>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough'>
</cpu>
<os>
<type>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<devices>
<disk type='file' device='cdrom'>
<source file='/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='default' io='threads' />
<source file='/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/addons-964416.rawdisk'/>
<target dev='hda' bus='virtio'/>
</disk>
<interface type='network'>
<source network='mk-addons-964416'/>
<model type='virtio'/>
</interface>
<interface type='network'>
<source network='default'/>
<model type='virtio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
</rng>
</devices>
</domain>
I1123 08:10:58.770763 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:bd:1f:e4 in network default
I1123 08:10:58.771292 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:10:58.771310 18653 main.go:143] libmachine: starting domain...
I1123 08:10:58.771314 18653 main.go:143] libmachine: ensuring networks are active...
I1123 08:10:58.771925 18653 main.go:143] libmachine: Ensuring network default is active
I1123 08:10:58.772212 18653 main.go:143] libmachine: Ensuring network mk-addons-964416 is active
I1123 08:10:58.772783 18653 main.go:143] libmachine: getting domain XML...
I1123 08:10:58.773767 18653 main.go:143] libmachine: starting domain XML:
<domain type='kvm'>
<name>addons-964416</name>
<uuid>198921e3-3bb9-4b45-9dea-69ff479a7843</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
<boot dev='cdrom'/>
<boot dev='hd'/>
<bootmenu enable='no'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'/>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/boot2docker.iso'/>
<target dev='hdc' bus='scsi'/>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' io='threads'/>
<source file='/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/addons-964416.rawdisk'/>
<target dev='hda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='scsi' index='0' model='lsilogic'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:e8:75:8f'/>
<source network='mk-addons-964416'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</interface>
<interface type='network'>
<mac address='52:54:00:bd:1f:e4'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<audio id='1' type='none'/>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/random</backend>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</rng>
</devices>
</domain>
I1123 08:10:59.200242 18653 main.go:143] libmachine: waiting for domain to start...
I1123 08:10:59.201336 18653 main.go:143] libmachine: domain is now running
I1123 08:10:59.201352 18653 main.go:143] libmachine: waiting for IP...
I1123 08:10:59.202050 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:10:59.202419 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:10:59.202432 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:10:59.202663 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:10:59.202715 18653 retry.go:31] will retry after 205.989952ms: waiting for domain to come up
I1123 08:10:59.410172 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:10:59.410663 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:10:59.410677 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:10:59.410917 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:10:59.410965 18653 retry.go:31] will retry after 267.84973ms: waiting for domain to come up
I1123 08:10:59.680513 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:10:59.680952 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:10:59.680966 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:10:59.681180 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:10:59.681206 18653 retry.go:31] will retry after 477.98669ms: waiting for domain to come up
I1123 08:11:00.160923 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:00.161450 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:11:00.161481 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:11:00.161775 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:11:00.161808 18653 retry.go:31] will retry after 471.610526ms: waiting for domain to come up
I1123 08:11:00.635573 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:00.636080 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:11:00.636095 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:11:00.636344 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:11:00.636385 18653 retry.go:31] will retry after 542.4133ms: waiting for domain to come up
I1123 08:11:01.180105 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:01.180624 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:11:01.180642 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:11:01.180952 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:11:01.180989 18653 retry.go:31] will retry after 703.526723ms: waiting for domain to come up
I1123 08:11:01.885695 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:01.886173 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:11:01.886186 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:11:01.886454 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:11:01.886506 18653 retry.go:31] will retry after 909.542016ms: waiting for domain to come up
I1123 08:11:02.797278 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:02.797806 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:11:02.797824 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:11:02.798072 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:11:02.798105 18653 retry.go:31] will retry after 1.192874427s: waiting for domain to come up
I1123 08:11:03.992911 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:03.993501 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:11:03.993520 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:11:03.993793 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:11:03.993827 18653 retry.go:31] will retry after 1.248389295s: waiting for domain to come up
I1123 08:11:05.244214 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:05.244760 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:11:05.244777 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:11:05.245052 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:11:05.245084 18653 retry.go:31] will retry after 1.651266277s: waiting for domain to come up
I1123 08:11:06.898820 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:06.899378 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:11:06.899390 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:11:06.899705 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:11:06.899727 18653 retry.go:31] will retry after 2.501950947s: waiting for domain to come up
I1123 08:11:09.403560 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:09.404138 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:11:09.404156 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:11:09.404482 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:11:09.404524 18653 retry.go:31] will retry after 2.547751799s: waiting for domain to come up
I1123 08:11:11.953413 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:11.953888 18653 main.go:143] libmachine: no network interface addresses found for domain addons-964416 (source=lease)
I1123 08:11:11.953900 18653 main.go:143] libmachine: trying to list again with source=arp
I1123 08:11:11.954167 18653 main.go:143] libmachine: unable to find current IP address of domain addons-964416 in network mk-addons-964416 (interfaces detected: [])
I1123 08:11:11.954191 18653 retry.go:31] will retry after 3.765225681s: waiting for domain to come up
I1123 08:11:15.722527 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:15.723057 18653 main.go:143] libmachine: domain addons-964416 has current primary IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:15.723070 18653 main.go:143] libmachine: found domain IP: 192.168.39.198
I1123 08:11:15.723076 18653 main.go:143] libmachine: reserving static IP address...
I1123 08:11:15.723458 18653 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-964416", mac: "52:54:00:e8:75:8f", ip: "192.168.39.198"} in network mk-addons-964416
I1123 08:11:15.897620 18653 main.go:143] libmachine: reserved static IP address 192.168.39.198 for domain addons-964416
I1123 08:11:15.897647 18653 main.go:143] libmachine: waiting for SSH...
I1123 08:11:15.897654 18653 main.go:143] libmachine: Getting to WaitForSSH function...
I1123 08:11:15.900288 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:15.900789 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:15.900818 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:15.900981 18653 main.go:143] libmachine: Using SSH client type: native
I1123 08:11:15.901180 18653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.198 22 <nil> <nil>}
I1123 08:11:15.901195 18653 main.go:143] libmachine: About to run SSH command:
exit 0
I1123 08:11:16.014135 18653 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1123 08:11:16.014540 18653 main.go:143] libmachine: domain creation complete
I1123 08:11:16.016018 18653 machine.go:94] provisionDockerMachine start ...
I1123 08:11:16.018144 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.018554 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:16.018584 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.018747 18653 main.go:143] libmachine: Using SSH client type: native
I1123 08:11:16.018954 18653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.198 22 <nil> <nil>}
I1123 08:11:16.018968 18653 main.go:143] libmachine: About to run SSH command:
hostname
I1123 08:11:16.130854 18653 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
I1123 08:11:16.130878 18653 buildroot.go:166] provisioning hostname "addons-964416"
I1123 08:11:16.133669 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.134073 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:16.134094 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.134246 18653 main.go:143] libmachine: Using SSH client type: native
I1123 08:11:16.134452 18653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.198 22 <nil> <nil>}
I1123 08:11:16.134478 18653 main.go:143] libmachine: About to run SSH command:
sudo hostname addons-964416 && echo "addons-964416" | sudo tee /etc/hostname
I1123 08:11:16.265167 18653 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-964416
I1123 08:11:16.267795 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.268099 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:16.268118 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.268296 18653 main.go:143] libmachine: Using SSH client type: native
I1123 08:11:16.268503 18653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.198 22 <nil> <nil>}
I1123 08:11:16.268518 18653 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\saddons-964416' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-964416/g' /etc/hosts;
else
echo '127.0.1.1 addons-964416' | sudo tee -a /etc/hosts;
fi
fi
I1123 08:11:16.392545 18653 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1123 08:11:16.392576 18653 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21969-14048/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-14048/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-14048/.minikube}
I1123 08:11:16.392612 18653 buildroot.go:174] setting up certificates
I1123 08:11:16.392627 18653 provision.go:84] configureAuth start
I1123 08:11:16.395130 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.395494 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:16.395520 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.397512 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.397787 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:16.397810 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.397940 18653 provision.go:143] copyHostCerts
I1123 08:11:16.398013 18653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-14048/.minikube/ca.pem (1082 bytes)
I1123 08:11:16.398124 18653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-14048/.minikube/cert.pem (1123 bytes)
I1123 08:11:16.398207 18653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-14048/.minikube/key.pem (1675 bytes)
I1123 08:11:16.398289 18653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-14048/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca-key.pem org=jenkins.addons-964416 san=[127.0.0.1 192.168.39.198 addons-964416 localhost minikube]
I1123 08:11:16.483278 18653 provision.go:177] copyRemoteCerts
I1123 08:11:16.483341 18653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1123 08:11:16.485737 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.486095 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:16.486134 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.486267 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:16.573503 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1123 08:11:16.602745 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
I1123 08:11:16.630774 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1123 08:11:16.659644 18653 provision.go:87] duration metric: took 267.000965ms to configureAuth
I1123 08:11:16.659677 18653 buildroot.go:189] setting minikube options for container-runtime
I1123 08:11:16.659913 18653 config.go:182] Loaded profile config "addons-964416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:11:16.662198 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.662572 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:16.662602 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.662790 18653 main.go:143] libmachine: Using SSH client type: native
I1123 08:11:16.662977 18653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.198 22 <nil> <nil>}
I1123 08:11:16.662991 18653 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1123 08:11:16.916158 18653 main.go:143] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1123 08:11:16.916190 18653 machine.go:97] duration metric: took 900.154416ms to provisionDockerMachine
I1123 08:11:16.916204 18653 client.go:176] duration metric: took 18.829290568s to LocalClient.Create
I1123 08:11:16.916227 18653 start.go:167] duration metric: took 18.829349595s to libmachine.API.Create "addons-964416"
I1123 08:11:16.916238 18653 start.go:293] postStartSetup for "addons-964416" (driver="kvm2")
I1123 08:11:16.916255 18653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1123 08:11:16.916354 18653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1123 08:11:16.918849 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.919244 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:16.919265 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:16.919377 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:17.007648 18653 ssh_runner.go:195] Run: cat /etc/os-release
I1123 08:11:17.012596 18653 info.go:137] Remote host: Buildroot 2025.02
I1123 08:11:17.012618 18653 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-14048/.minikube/addons for local assets ...
I1123 08:11:17.012668 18653 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-14048/.minikube/files for local assets ...
I1123 08:11:17.012692 18653 start.go:296] duration metric: took 96.443453ms for postStartSetup
I1123 08:11:17.047765 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:17.048060 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:17.048079 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:17.048253 18653 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/config.json ...
I1123 08:11:17.048427 18653 start.go:128] duration metric: took 18.963529863s to createHost
I1123 08:11:17.050836 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:17.051661 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:17.051693 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:17.051888 18653 main.go:143] libmachine: Using SSH client type: native
I1123 08:11:17.052098 18653 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil> [] 0s} 192.168.39.198 22 <nil> <nil>}
I1123 08:11:17.052111 18653 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1123 08:11:17.165872 18653 main.go:143] libmachine: SSH cmd err, output: <nil>: 1763885477.130700134
I1123 08:11:17.165896 18653 fix.go:216] guest clock: 1763885477.130700134
I1123 08:11:17.165903 18653 fix.go:229] Guest: 2025-11-23 08:11:17.130700134 +0000 UTC Remote: 2025-11-23 08:11:17.048438717 +0000 UTC m=+19.056022171 (delta=82.261417ms)
I1123 08:11:17.165919 18653 fix.go:200] guest clock delta is within tolerance: 82.261417ms
I1123 08:11:17.165924 18653 start.go:83] releasing machines lock for "addons-964416", held for 19.081095343s
I1123 08:11:17.168830 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:17.169234 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:17.169256 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:17.169808 18653 ssh_runner.go:195] Run: cat /version.json
I1123 08:11:17.169904 18653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1123 08:11:17.172843 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:17.172885 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:17.173244 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:17.173264 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:17.173311 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:17.173342 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:17.173418 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:17.173644 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:17.280421 18653 ssh_runner.go:195] Run: systemctl --version
I1123 08:11:17.286923 18653 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
I1123 08:11:17.443209 18653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1123 08:11:17.450509 18653 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1123 08:11:17.450575 18653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1123 08:11:17.470583 18653 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I1123 08:11:17.470610 18653 start.go:496] detecting cgroup driver to use...
I1123 08:11:17.470673 18653 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1123 08:11:17.488970 18653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1123 08:11:17.505149 18653 docker.go:218] disabling cri-docker service (if available) ...
I1123 08:11:17.505201 18653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1123 08:11:17.522232 18653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1123 08:11:17.538429 18653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1123 08:11:17.681162 18653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1123 08:11:17.886230 18653 docker.go:234] disabling docker service ...
I1123 08:11:17.886312 18653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1123 08:11:17.902807 18653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1123 08:11:17.917113 18653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1123 08:11:18.073262 18653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1123 08:11:18.213337 18653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1123 08:11:18.228778 18653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1123 08:11:18.252090 18653 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
I1123 08:11:18.252154 18653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
I1123 08:11:18.264270 18653 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
I1123 08:11:18.264350 18653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
I1123 08:11:18.276544 18653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
I1123 08:11:18.288927 18653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
I1123 08:11:18.301013 18653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1123 08:11:18.313584 18653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
I1123 08:11:18.325701 18653 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
I1123 08:11:18.345650 18653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
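The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl). A minimal sketch, assuming that same drop-in path, to confirm the resulting values from inside the guest (for example via minikube ssh):
  # Show the keys the sed commands above are expected to have set.
  sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' /etc/crio/crio.conf.d/02-crio.conf
  # The sysctl entry is appended inside the default_sysctls list.
  sudo grep -n 'net.ipv4.ip_unprivileged_port_start=0' /etc/crio/crio.conf.d/02-crio.conf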
I1123 08:11:18.357887 18653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1123 08:11:18.367953 18653 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
stdout:
stderr:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
I1123 08:11:18.367991 18653 ssh_runner.go:195] Run: sudo modprobe br_netfilter
I1123 08:11:18.390599 18653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
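The sysctl failure above only means the br_netfilter module was not loaded yet; the modprobe and the ip_forward write are the fix. A quick sketch to confirm both took effect inside the VM:
  # br_netfilter must be loaded before the bridge-nf-call-iptables sysctl exists.
  lsmod | grep br_netfilter
  sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward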
I1123 08:11:18.404998 18653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1123 08:11:18.548395 18653 ssh_runner.go:195] Run: sudo systemctl restart crio
I1123 08:11:18.660875 18653 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
I1123 08:11:18.660971 18653 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
I1123 08:11:18.666411 18653 start.go:564] Will wait 60s for crictl version
I1123 08:11:18.666484 18653 ssh_runner.go:195] Run: which crictl
I1123 08:11:18.670714 18653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1123 08:11:18.709018 18653 start.go:580] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.29.1
RuntimeApiVersion: v1
I1123 08:11:18.709134 18653 ssh_runner.go:195] Run: crio --version
I1123 08:11:18.738295 18653 ssh_runner.go:195] Run: crio --version
I1123 08:11:18.769287 18653 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
I1123 08:11:18.772706 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:18.773150 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:18.773173 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:18.773395 18653 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1123 08:11:18.778010 18653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1123 08:11:18.793347 18653 kubeadm.go:884] updating cluster {Name:addons-964416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-964416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1123 08:11:18.793522 18653 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1123 08:11:18.793570 18653 ssh_runner.go:195] Run: sudo crictl images --output json
I1123 08:11:18.823817 18653 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
I1123 08:11:18.823886 18653 ssh_runner.go:195] Run: which lz4
I1123 08:11:18.828243 18653 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I1123 08:11:18.832970 18653 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/preloaded.tar.lz4': No such file or directory
I1123 08:11:18.833001 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
I1123 08:11:20.277620 18653 crio.go:462] duration metric: took 1.449416073s to copy over tarball
I1123 08:11:20.277695 18653 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
I1123 08:11:21.895625 18653 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.617907972s)
I1123 08:11:21.895650 18653 crio.go:469] duration metric: took 1.618002394s to extract the tarball
I1123 08:11:21.895657 18653 ssh_runner.go:146] rm: /preloaded.tar.lz4
I1123 08:11:21.936673 18653 ssh_runner.go:195] Run: sudo crictl images --output json
I1123 08:11:21.979082 18653 crio.go:514] all images are preloaded for cri-o runtime.
I1123 08:11:21.979107 18653 cache_images.go:86] Images are preloaded, skipping loading
I1123 08:11:21.979116 18653 kubeadm.go:935] updating node { 192.168.39.198 8443 v1.34.1 crio true true} ...
I1123 08:11:21.979206 18653 kubeadm.go:947] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-964416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.198
[Install]
config:
{KubernetesVersion:v1.34.1 ClusterName:addons-964416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
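The [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube installs (scp'd a few lines below as 10-kubeadm.conf). A small sketch for inspecting what actually landed on the node, assuming systemd is available inside the VM:
  # View the unit plus the drop-in carrying the ExecStart shown above, then check it is running.
  systemctl cat kubelet
  systemctl is-active kubelet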
I1123 08:11:21.979289 18653 ssh_runner.go:195] Run: crio config
I1123 08:11:22.025180 18653 cni.go:84] Creating CNI manager for ""
I1123 08:11:22.025211 18653 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1123 08:11:22.025231 18653 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1123 08:11:22.025253 18653 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.198 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-964416 NodeName:addons-964416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1123 08:11:22.025364 18653 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.39.198
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/crio/crio.sock
name: "addons-964416"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.39.198"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.39.198"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.1
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
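This generated kubeadm configuration is written to /var/tmp/minikube/kubeadm.yaml.new and promoted to /var/tmp/minikube/kubeadm.yaml further down in the log. A hedged sketch for sanity-checking a config like this by hand, reusing the binary path shown later in this log, is a dry run, which only renders what kubeadm would do:
  # Dry-run prints the generated manifests and actions without starting a cluster.
  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run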
I1123 08:11:22.025449 18653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
I1123 08:11:22.037734 18653 binaries.go:51] Found k8s binaries, skipping transfer
I1123 08:11:22.037805 18653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1123 08:11:22.049522 18653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I1123 08:11:22.070033 18653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1123 08:11:22.090976 18653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
I1123 08:11:22.111193 18653 ssh_runner.go:195] Run: grep 192.168.39.198 control-plane.minikube.internal$ /etc/hosts
I1123 08:11:22.115527 18653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.198 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1123 08:11:22.130414 18653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1123 08:11:22.270003 18653 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1123 08:11:22.291583 18653 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416 for IP: 192.168.39.198
I1123 08:11:22.291611 18653 certs.go:195] generating shared ca certs ...
I1123 08:11:22.291630 18653 certs.go:227] acquiring lock for ca certs: {Name:mkaeb9dc4e066e858e41c686c8e5e48e63a99316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:22.291792 18653 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-14048/.minikube/ca.key
I1123 08:11:22.347850 18653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt ...
I1123 08:11:22.347878 18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt: {Name:mk20cfbbe0e260e30b971f49e8bd6543e0947bb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:22.348038 18653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-14048/.minikube/ca.key ...
I1123 08:11:22.348050 18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/ca.key: {Name:mkfe70366891274ede47b02e24442af5d9af5d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:22.348123 18653 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.key
I1123 08:11:22.386591 18653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.crt ...
I1123 08:11:22.386614 18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.crt: {Name:mk2ee3c3942cc0dc5ef41beb046bb819150fd46c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:22.386750 18653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.key ...
I1123 08:11:22.386761 18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.key: {Name:mk4c03b1697eedd6395db853d5b6d9005823b710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:22.386827 18653 certs.go:257] generating profile certs ...
I1123 08:11:22.386880 18653 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.key
I1123 08:11:22.386895 18653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt with IP's: []
I1123 08:11:22.417398 18653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt ...
I1123 08:11:22.417423 18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.crt: {Name:mk4910134fd4bedd14eed21e7416eb0cf90b1a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:22.417575 18653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.key ...
I1123 08:11:22.417587 18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/client.key: {Name:mk0ec0189c24fb5bd4b3c1ce690a2cbadff79af1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:22.417656 18653 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.key.b74a7a8c
I1123 08:11:22.417673 18653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.crt.b74a7a8c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.198]
I1123 08:11:22.591814 18653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.crt.b74a7a8c ...
I1123 08:11:22.591843 18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.crt.b74a7a8c: {Name:mkdb5363ba8b730bdb44382a62f62248c73d959d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:22.592001 18653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.key.b74a7a8c ...
I1123 08:11:22.592015 18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.key.b74a7a8c: {Name:mk6d8e84fba8b4506b05c6b5a5a0a33ed018c927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:22.592095 18653 certs.go:382] copying /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.crt.b74a7a8c -> /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.crt
I1123 08:11:22.592165 18653 certs.go:386] copying /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.key.b74a7a8c -> /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.key
I1123 08:11:22.592214 18653 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.key
I1123 08:11:22.592232 18653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.crt with IP's: []
I1123 08:11:22.774363 18653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.crt ...
I1123 08:11:22.774392 18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.crt: {Name:mke095bca8a3bbdaedaf5ec07eec71ca6e778658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:22.775053 18653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.key ...
I1123 08:11:22.775069 18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.key: {Name:mk5bb54bfe5bf2177917ffdfe7c8501a7453f143 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:22.775260 18653 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca-key.pem (1675 bytes)
I1123 08:11:22.775297 18653 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/ca.pem (1082 bytes)
I1123 08:11:22.775324 18653 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/cert.pem (1123 bytes)
I1123 08:11:22.775348 18653 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-14048/.minikube/certs/key.pem (1675 bytes)
I1123 08:11:22.775928 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1123 08:11:22.808393 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1123 08:11:22.839305 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1123 08:11:22.869871 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1123 08:11:22.899789 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
I1123 08:11:22.928927 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1123 08:11:22.958482 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1123 08:11:22.990572 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/profiles/addons-964416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1123 08:11:23.019517 18653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-14048/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1123 08:11:23.054978 18653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1123 08:11:23.075796 18653 ssh_runner.go:195] Run: openssl version
I1123 08:11:23.082308 18653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1123 08:11:23.095637 18653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1123 08:11:23.101054 18653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
I1123 08:11:23.101114 18653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1123 08:11:23.108731 18653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
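The two commands above follow the usual OpenSSL subject-hash convention: the CA is linked under /usr/share/ca-certificates and a <hash>.0 symlink is created in /etc/ssl/certs so OpenSSL can locate it by subject hash (b5213941 in this run). A small sketch of the same pattern with the hash recomputed instead of hard-coded:
  # Recompute the subject hash rather than relying on the b5213941 value from this run.
  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"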
I1123 08:11:23.121598 18653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1123 08:11:23.126928 18653 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1123 08:11:23.126978 18653 kubeadm.go:401] StartCluster: {Name:addons-964416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-964416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1123 08:11:23.127053 18653 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1123 08:11:23.127101 18653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1123 08:11:23.162543 18653 cri.go:89] found id: ""
I1123 08:11:23.162603 18653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1123 08:11:23.174594 18653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1123 08:11:23.186086 18653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1123 08:11:23.197490 18653 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1123 08:11:23.197508 18653 kubeadm.go:158] found existing configuration files:
I1123 08:11:23.197551 18653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1123 08:11:23.208166 18653 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1123 08:11:23.208231 18653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1123 08:11:23.219522 18653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1123 08:11:23.230902 18653 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1123 08:11:23.230966 18653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1123 08:11:23.242496 18653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1123 08:11:23.253773 18653 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1123 08:11:23.253830 18653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1123 08:11:23.265493 18653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1123 08:11:23.276712 18653 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1123 08:11:23.276781 18653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1123 08:11:23.288572 18653 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
I1123 08:11:23.436210 18653 kubeadm.go:319] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1123 08:11:35.904565 18653 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
I1123 08:11:35.904632 18653 kubeadm.go:319] [preflight] Running pre-flight checks
I1123 08:11:35.904719 18653 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
I1123 08:11:35.904841 18653 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1123 08:11:35.904925 18653 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1123 08:11:35.904980 18653 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1123 08:11:35.906967 18653 out.go:252] - Generating certificates and keys ...
I1123 08:11:35.907067 18653 kubeadm.go:319] [certs] Using existing ca certificate authority
I1123 08:11:35.907168 18653 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
I1123 08:11:35.907271 18653 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
I1123 08:11:35.907372 18653 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
I1123 08:11:35.907451 18653 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
I1123 08:11:35.907523 18653 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
I1123 08:11:35.907609 18653 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
I1123 08:11:35.907749 18653 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-964416 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
I1123 08:11:35.907825 18653 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
I1123 08:11:35.907938 18653 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-964416 localhost] and IPs [192.168.39.198 127.0.0.1 ::1]
I1123 08:11:35.908010 18653 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
I1123 08:11:35.908073 18653 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
I1123 08:11:35.908117 18653 kubeadm.go:319] [certs] Generating "sa" key and public key
I1123 08:11:35.908170 18653 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1123 08:11:35.908237 18653 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
I1123 08:11:35.908297 18653 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1123 08:11:35.908343 18653 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1123 08:11:35.908409 18653 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1123 08:11:35.908454 18653 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1123 08:11:35.908561 18653 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1123 08:11:35.908647 18653 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1123 08:11:35.909666 18653 out.go:252] - Booting up control plane ...
I1123 08:11:35.909752 18653 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1123 08:11:35.909837 18653 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1123 08:11:35.909926 18653 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1123 08:11:35.910079 18653 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1123 08:11:35.910164 18653 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
I1123 08:11:35.910249 18653 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
I1123 08:11:35.910336 18653 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1123 08:11:35.910384 18653 kubeadm.go:319] [kubelet-start] Starting the kubelet
I1123 08:11:35.910564 18653 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1123 08:11:35.910705 18653 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I1123 08:11:35.910755 18653 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001955279s
I1123 08:11:35.910827 18653 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
I1123 08:11:35.910932 18653 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.198:8443/livez
I1123 08:11:35.911004 18653 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
I1123 08:11:35.911070 18653 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
I1123 08:11:35.911143 18653 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.751500362s
I1123 08:11:35.911210 18653 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.401515098s
I1123 08:11:35.911308 18653 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501593201s
I1123 08:11:35.911401 18653 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1123 08:11:35.911546 18653 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I1123 08:11:35.911625 18653 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
I1123 08:11:35.911850 18653 kubeadm.go:319] [mark-control-plane] Marking the node addons-964416 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I1123 08:11:35.911934 18653 kubeadm.go:319] [bootstrap-token] Using token: qbvgpa.gdtv5a1xhu29o3p0
I1123 08:11:35.913781 18653 out.go:252] - Configuring RBAC rules ...
I1123 08:11:35.913871 18653 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I1123 08:11:35.913943 18653 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I1123 08:11:35.914099 18653 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I1123 08:11:35.914273 18653 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I1123 08:11:35.914444 18653 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I1123 08:11:35.914583 18653 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1123 08:11:35.914722 18653 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1123 08:11:35.914791 18653 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
I1123 08:11:35.914863 18653 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
I1123 08:11:35.914871 18653 kubeadm.go:319]
I1123 08:11:35.914964 18653 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
I1123 08:11:35.914978 18653 kubeadm.go:319]
I1123 08:11:35.915061 18653 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
I1123 08:11:35.915070 18653 kubeadm.go:319]
I1123 08:11:35.915104 18653 kubeadm.go:319] mkdir -p $HOME/.kube
I1123 08:11:35.915183 18653 kubeadm.go:319] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I1123 08:11:35.915263 18653 kubeadm.go:319] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I1123 08:11:35.915274 18653 kubeadm.go:319]
I1123 08:11:35.915320 18653 kubeadm.go:319] Alternatively, if you are the root user, you can run:
I1123 08:11:35.915328 18653 kubeadm.go:319]
I1123 08:11:35.915394 18653 kubeadm.go:319] export KUBECONFIG=/etc/kubernetes/admin.conf
I1123 08:11:35.915403 18653 kubeadm.go:319]
I1123 08:11:35.915493 18653 kubeadm.go:319] You should now deploy a pod network to the cluster.
I1123 08:11:35.915618 18653 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I1123 08:11:35.915724 18653 kubeadm.go:319] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I1123 08:11:35.915737 18653 kubeadm.go:319]
I1123 08:11:35.915864 18653 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
I1123 08:11:35.915939 18653 kubeadm.go:319] and service account keys on each node and then running the following as root:
I1123 08:11:35.915945 18653 kubeadm.go:319]
I1123 08:11:35.916021 18653 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qbvgpa.gdtv5a1xhu29o3p0 \
I1123 08:11:35.916117 18653 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:b6edc1ca7c90bf9718138496669098f2f79ed1548b9ca908b39b661d6f737e61 \
I1123 08:11:35.916142 18653 kubeadm.go:319] --control-plane
I1123 08:11:35.916146 18653 kubeadm.go:319]
I1123 08:11:35.916220 18653 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
I1123 08:11:35.916226 18653 kubeadm.go:319]
I1123 08:11:35.916288 18653 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qbvgpa.gdtv5a1xhu29o3p0 \
I1123 08:11:35.916392 18653 kubeadm.go:319] --discovery-token-ca-cert-hash sha256:b6edc1ca7c90bf9718138496669098f2f79ed1548b9ca908b39b661d6f737e61
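The control-plane-check lines above poll health endpoints that can also be hit by hand. A sketch using the addresses from this log (these paths are typically readable without credentials on default configurations; run inside the VM):
  # Endpoints taken from the control-plane-check output above.
  curl -ks https://192.168.39.198:8443/livez
  curl -ks https://127.0.0.1:10257/healthz   # kube-controller-manager
  curl -ks https://127.0.0.1:10259/livez     # kube-scheduler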
I1123 08:11:35.916403 18653 cni.go:84] Creating CNI manager for ""
I1123 08:11:35.916410 18653 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
I1123 08:11:35.917857 18653 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1123 08:11:35.918962 18653 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1123 08:11:35.932867 18653 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1123 08:11:35.959128 18653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1123 08:11:35.959232 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1123 08:11:35.959246 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-964416 minikube.k8s.io/updated_at=2025_11_23T08_11_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=addons-964416 minikube.k8s.io/primary=true
I1123 08:11:36.011606 18653 ops.go:34] apiserver oom_adj: -16
I1123 08:11:36.082160 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1123 08:11:36.583160 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1123 08:11:37.083151 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1123 08:11:37.582209 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1123 08:11:38.082605 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1123 08:11:38.582790 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1123 08:11:39.082308 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1123 08:11:39.582221 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1123 08:11:40.082598 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1123 08:11:40.582415 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1123 08:11:41.082790 18653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1123 08:11:41.181687 18653 kubeadm.go:1114] duration metric: took 5.222519084s to wait for elevateKubeSystemPrivileges
I1123 08:11:41.181732 18653 kubeadm.go:403] duration metric: took 18.054758087s to StartCluster
I1123 08:11:41.181756 18653 settings.go:142] acquiring lock: {Name:mkab6903339ca646213aa209a9d09b91734329a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:41.181918 18653 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/21969-14048/kubeconfig
I1123 08:11:41.182457 18653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-14048/kubeconfig: {Name:mk15e2740703c77f3808fd0888f2d0465004dca1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1123 08:11:41.182725 18653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1123 08:11:41.182746 18653 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.198 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
I1123 08:11:41.182816 18653 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
I1123 08:11:41.182933 18653 addons.go:70] Setting yakd=true in profile "addons-964416"
I1123 08:11:41.182948 18653 addons.go:70] Setting inspektor-gadget=true in profile "addons-964416"
I1123 08:11:41.182959 18653 addons.go:239] Setting addon yakd=true in "addons-964416"
I1123 08:11:41.182960 18653 addons.go:239] Setting addon inspektor-gadget=true in "addons-964416"
I1123 08:11:41.182986 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.182991 18653 addons.go:70] Setting cloud-spanner=true in profile "addons-964416"
I1123 08:11:41.183016 18653 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-964416"
I1123 08:11:41.183026 18653 addons.go:70] Setting volcano=true in profile "addons-964416"
I1123 08:11:41.183029 18653 addons.go:239] Setting addon cloud-spanner=true in "addons-964416"
I1123 08:11:41.183035 18653 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-964416"
I1123 08:11:41.183045 18653 addons.go:70] Setting volumesnapshots=true in profile "addons-964416"
I1123 08:11:41.183051 18653 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-964416"
I1123 08:11:41.183057 18653 addons.go:239] Setting addon volumesnapshots=true in "addons-964416"
I1123 08:11:41.183064 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.183073 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.183075 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.183074 18653 addons.go:70] Setting registry=true in profile "addons-964416"
I1123 08:11:41.183089 18653 addons.go:239] Setting addon registry=true in "addons-964416"
I1123 08:11:41.183119 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.183251 18653 addons.go:70] Setting ingress=true in profile "addons-964416"
I1123 08:11:41.183276 18653 addons.go:239] Setting addon ingress=true in "addons-964416"
I1123 08:11:41.183310 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.183598 18653 addons.go:70] Setting gcp-auth=true in profile "addons-964416"
I1123 08:11:41.183623 18653 mustload.go:66] Loading cluster: addons-964416
I1123 08:11:41.183781 18653 config.go:182] Loaded profile config "addons-964416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:11:41.183824 18653 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-964416"
I1123 08:11:41.183845 18653 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-964416"
I1123 08:11:41.183867 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.183930 18653 addons.go:70] Setting ingress-dns=true in profile "addons-964416"
I1123 08:11:41.183949 18653 addons.go:239] Setting addon ingress-dns=true in "addons-964416"
I1123 08:11:41.183974 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.183017 18653 addons.go:70] Setting metrics-server=true in profile "addons-964416"
I1123 08:11:41.184127 18653 addons.go:239] Setting addon metrics-server=true in "addons-964416"
I1123 08:11:41.183037 18653 addons.go:239] Setting addon volcano=true in "addons-964416"
I1123 08:11:41.184227 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.184242 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.183000 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.182993 18653 addons.go:70] Setting default-storageclass=true in profile "addons-964416"
I1123 08:11:41.184844 18653 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-964416"
I1123 08:11:41.183065 18653 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-964416"
I1123 08:11:41.185088 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.182936 18653 config.go:182] Loaded profile config "addons-964416": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:11:41.183009 18653 addons.go:70] Setting storage-provisioner=true in profile "addons-964416"
I1123 08:11:41.183019 18653 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-964416"
I1123 08:11:41.185258 18653 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-964416"
I1123 08:11:41.185237 18653 addons.go:239] Setting addon storage-provisioner=true in "addons-964416"
I1123 08:11:41.185378 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.182998 18653 addons.go:70] Setting registry-creds=true in profile "addons-964416"
I1123 08:11:41.185497 18653 addons.go:239] Setting addon registry-creds=true in "addons-964416"
I1123 08:11:41.185524 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.186205 18653 out.go:179] * Verifying Kubernetes components...
I1123 08:11:41.187859 18653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1123 08:11:41.190339 18653 host.go:66] Checking if "addons-964416" exists ...
W1123 08:11:41.191938 18653 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
I1123 08:11:41.192936 18653 out.go:179] - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
I1123 08:11:41.192997 18653 out.go:179] - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
I1123 08:11:41.193019 18653 out.go:179] - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
I1123 08:11:41.193435 18653 addons.go:239] Setting addon default-storageclass=true in "addons-964416"
I1123 08:11:41.193763 18653 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-964416"
I1123 08:11:41.193785 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.193793 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:41.193185 18653 out.go:179] - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
I1123 08:11:41.193025 18653 out.go:179] - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
I1123 08:11:41.193816 18653 out.go:179] - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
I1123 08:11:41.193826 18653 out.go:179] - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
I1123 08:11:41.193843 18653 out.go:179] - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
I1123 08:11:41.193848 18653 out.go:179] - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
I1123 08:11:41.193853 18653 out.go:179] - Using image docker.io/marcnuri/yakd:0.0.5
I1123 08:11:41.194556 18653 out.go:179] - Using image docker.io/upmcenterprises/registry-creds:1.10
I1123 08:11:41.194579 18653 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
I1123 08:11:41.195028 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
I1123 08:11:41.194583 18653 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
I1123 08:11:41.195097 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
I1123 08:11:41.195717 18653 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1123 08:11:41.195742 18653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1123 08:11:41.195945 18653 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
I1123 08:11:41.195962 18653 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
I1123 08:11:41.196055 18653 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1123 08:11:41.196059 18653 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1123 08:11:41.196069 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
I1123 08:11:41.196071 18653 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
I1123 08:11:41.196074 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
I1123 08:11:41.196078 18653 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
I1123 08:11:41.196097 18653 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
I1123 08:11:41.196107 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
I1123 08:11:41.196120 18653 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1123 08:11:41.196021 18653 out.go:179] - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
I1123 08:11:41.196122 18653 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
I1123 08:11:41.196142 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
I1123 08:11:41.196187 18653 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1123 08:11:41.196195 18653 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1123 08:11:41.197380 18653 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1123 08:11:41.198006 18653 out.go:179] - Using image docker.io/registry:3.0.0
I1123 08:11:41.198032 18653 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1123 08:11:41.198043 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1123 08:11:41.198009 18653 out.go:179] - Using image docker.io/rancher/local-path-provisioner:v0.0.22
I1123 08:11:41.198824 18653 out.go:179] - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
I1123 08:11:41.199704 18653 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
I1123 08:11:41.199718 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
I1123 08:11:41.200272 18653 out.go:179] - Using image docker.io/busybox:stable
I1123 08:11:41.200325 18653 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1123 08:11:41.201378 18653 out.go:179] - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
I1123 08:11:41.201447 18653 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1123 08:11:41.201458 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
I1123 08:11:41.201595 18653 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
I1123 08:11:41.201612 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
I1123 08:11:41.203637 18653 out.go:179] - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
I1123 08:11:41.204685 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.205026 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.205977 18653 out.go:179] - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
I1123 08:11:41.206214 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.206253 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.206290 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.207066 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.207104 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.207386 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.207787 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.208127 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.208720 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.208760 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.208788 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.208924 18653 out.go:179] - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
I1123 08:11:41.208963 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.209032 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.209221 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.209485 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.209702 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.210022 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.210174 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.210212 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.210309 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.210335 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.210497 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.210775 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.210779 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.210828 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.210848 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.211091 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.211120 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.211290 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.211631 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.211681 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.211703 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.211767 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.211800 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.211927 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.211976 18653 out.go:179] - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
I1123 08:11:41.212017 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.212090 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.212122 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.212129 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.212169 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.212405 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.212611 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.212726 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.212904 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.213135 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.213142 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.213170 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.213400 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.213445 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.213490 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.213675 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.213686 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.213701 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.213844 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:41.214513 18653 out.go:179] - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
I1123 08:11:41.215873 18653 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
I1123 08:11:41.215894 18653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
I1123 08:11:41.218528 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.218915 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:41.218945 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:41.219124 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:42.171767 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
I1123 08:11:42.173863 18653 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
I1123 08:11:42.173881 18653 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
I1123 08:11:42.174751 18653 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
I1123 08:11:42.174765 18653 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
I1123 08:11:42.178885 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
I1123 08:11:42.203558 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
I1123 08:11:42.208448 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
I1123 08:11:42.293753 18653 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
I1123 08:11:42.293808 18653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
I1123 08:11:42.296069 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1123 08:11:42.302164 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1123 08:11:42.311267 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
I1123 08:11:42.347681 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
I1123 08:11:42.362584 18653 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1123 08:11:42.362602 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
I1123 08:11:42.379957 18653 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
I1123 08:11:42.379977 18653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
I1123 08:11:42.391990 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
I1123 08:11:42.440101 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
I1123 08:11:42.527359 18653 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
I1123 08:11:42.527383 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
I1123 08:11:42.638012 18653 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
I1123 08:11:42.638033 18653 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
I1123 08:11:42.866580 18653 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
I1123 08:11:42.866609 18653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
I1123 08:11:42.875338 18653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.692581221s)
I1123 08:11:42.875393 18653 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.68751022s)
I1123 08:11:42.875454 18653 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1123 08:11:42.875522 18653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1123 08:11:43.103109 18653 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1123 08:11:43.103132 18653 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1123 08:11:43.135364 18653 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
I1123 08:11:43.135391 18653 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
I1123 08:11:43.151890 18653 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
I1123 08:11:43.151911 18653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
I1123 08:11:43.160393 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
I1123 08:11:43.404985 18653 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
I1123 08:11:43.405010 18653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
I1123 08:11:43.489176 18653 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
I1123 08:11:43.489197 18653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
I1123 08:11:43.532283 18653 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1123 08:11:43.532309 18653 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1123 08:11:43.564321 18653 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
I1123 08:11:43.564854 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
I1123 08:11:43.772847 18653 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
I1123 08:11:43.772887 18653 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
I1123 08:11:43.817650 18653 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
I1123 08:11:43.817675 18653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
I1123 08:11:43.904825 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1123 08:11:43.946754 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
I1123 08:11:44.140771 18653 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
I1123 08:11:44.140809 18653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
I1123 08:11:44.180843 18653 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1123 08:11:44.180865 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
I1123 08:11:44.505533 18653 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
I1123 08:11:44.505556 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
I1123 08:11:44.732417 18653 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
I1123 08:11:44.732443 18653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
I1123 08:11:44.804597 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1123 08:11:45.227649 18653 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
I1123 08:11:45.227673 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
I1123 08:11:45.545435 18653 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
I1123 08:11:45.545455 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
I1123 08:11:46.086124 18653 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1123 08:11:46.086158 18653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
I1123 08:11:46.673237 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
I1123 08:11:47.655655 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.483848839s)
I1123 08:11:47.655718 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.476811802s)
I1123 08:11:47.655762 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (5.452186594s)
I1123 08:11:47.655836 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (5.447370266s)
I1123 08:11:48.115611 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.813420647s)
I1123 08:11:48.115733 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.804431054s)
I1123 08:11:48.115799 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.768082025s)
I1123 08:11:48.116056 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.81996011s)
I1123 08:11:48.640854 18653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
I1123 08:11:48.643519 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:48.643875 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:48.643896 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:48.644048 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:48.854435 18653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
I1123 08:11:48.924433 18653 addons.go:239] Setting addon gcp-auth=true in "addons-964416"
I1123 08:11:48.924488 18653 host.go:66] Checking if "addons-964416" exists ...
I1123 08:11:48.926175 18653 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
I1123 08:11:48.928235 18653 main.go:143] libmachine: domain addons-964416 has defined MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:48.928587 18653 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:75:8f", ip: ""} in network mk-addons-964416: {Iface:virbr1 ExpiryTime:2025-11-23 09:11:13 +0000 UTC Type:0 Mac:52:54:00:e8:75:8f Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:addons-964416 Clientid:01:52:54:00:e8:75:8f}
I1123 08:11:48.928608 18653 main.go:143] libmachine: domain addons-964416 has defined IP address 192.168.39.198 and MAC address 52:54:00:e8:75:8f in network mk-addons-964416
I1123 08:11:48.928737 18653 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21969-14048/.minikube/machines/addons-964416/id_rsa Username:docker}
I1123 08:11:50.270328 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.878309434s)
I1123 08:11:50.270357 18653 addons.go:495] Verifying addon ingress=true in "addons-964416"
I1123 08:11:50.270478 18653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.39.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.394914797s)
I1123 08:11:50.270508 18653 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.395016637s)
I1123 08:11:50.270510 18653 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
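(The completed pipeline above edits the CoreDNS ConfigMap: it inserts a hosts{} stanza in front of the "forward . /etc/resolv.conf" line so pods can resolve host.minikube.internal to the host gateway, 192.168.39.1 here, then replaces the ConfigMap. A minimal Go sketch of the same edit, assuming kubectl is on PATH and the current kubeconfig points at this cluster; matching and error handling are simplified compared to the sed expression in the log.)

// Sketch: fetch the coredns ConfigMap, insert a hosts{} stanza before the
// "forward . /etc/resolv.conf" line, and replace the ConfigMap.
package main

import (
    "bytes"
    "fmt"
    "log"
    "os/exec"
    "strings"
)

func main() {
    out, err := exec.Command("kubectl", "-n", "kube-system",
        "get", "configmap", "coredns", "-o", "yaml").Output()
    if err != nil {
        log.Fatal(err)
    }

    var patched []string
    for _, line := range strings.Split(string(out), "\n") {
        if strings.Contains(line, "forward . /etc/resolv.conf") {
            // Reuse the line's own indentation so the inserted stanza
            // lands at the right depth inside the Corefile.
            indent := line[:len(line)-len(strings.TrimLeft(line, " "))]
            patched = append(patched,
                indent+"hosts {",
                indent+"   192.168.39.1 host.minikube.internal",
                indent+"   fallthrough",
                indent+"}")
        }
        patched = append(patched, line)
    }

    replace := exec.Command("kubectl", "replace", "-f", "-")
    replace.Stdin = bytes.NewBufferString(strings.Join(patched, "\n"))
    if msg, err := replace.CombinedOutput(); err != nil {
        log.Fatalf("replace failed: %v\n%s", err, msg)
    }
    fmt.Println("host.minikube.internal record injected into CoreDNS")
}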
I1123 08:11:50.270573 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.110151486s)
I1123 08:11:50.270417 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.830283949s)
I1123 08:11:50.270605 18653 addons.go:495] Verifying addon registry=true in "addons-964416"
I1123 08:11:50.270722 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.323930377s)
I1123 08:11:50.270666 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.365806733s)
I1123 08:11:50.271647 18653 addons.go:495] Verifying addon metrics-server=true in "addons-964416"
I1123 08:11:50.271248 18653 node_ready.go:35] waiting up to 6m0s for node "addons-964416" to be "Ready" ...
I1123 08:11:50.272038 18653 out.go:179] * Verifying ingress addon...
I1123 08:11:50.272049 18653 out.go:179] * Verifying registry addon...
I1123 08:11:50.272921 18653 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
minikube -p addons-964416 service yakd-dashboard -n yakd-dashboard
I1123 08:11:50.274526 18653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
I1123 08:11:50.274575 18653 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
I1123 08:11:50.327736 18653 node_ready.go:49] node "addons-964416" is "Ready"
I1123 08:11:50.327771 18653 node_ready.go:38] duration metric: took 56.109166ms for node "addons-964416" to be "Ready" ...
I1123 08:11:50.327786 18653 api_server.go:52] waiting for apiserver process to appear ...
I1123 08:11:50.327846 18653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1123 08:11:50.327962 18653 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
I1123 08:11:50.327981 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:50.327998 18653 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
I1123 08:11:50.328011 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
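(The kapi.go lines that follow poll those two label selectors until every matching pod reports Ready. A compact sketch of the same wait, shelling out to kubectl wait; the selectors and namespaces are the ones visible in the log, the timeout is illustrative.)

// Sketch: block until pods matching a label selector are Ready, or fail
// after the timeout, by delegating the polling to `kubectl wait`.
package main

import (
    "log"
    "os/exec"
)

func waitForPods(ns, selector, timeout string) error {
    out, err := exec.Command("kubectl", "-n", ns, "wait",
        "--for=condition=Ready", "pod", "-l", selector,
        "--timeout="+timeout).CombinedOutput()
    log.Printf("%s", out)
    return err
}

func main() {
    if err := waitForPods("kube-system", "kubernetes.io/minikube-addons=registry", "6m"); err != nil {
        log.Fatal(err)
    }
    if err := waitForPods("ingress-nginx", "app.kubernetes.io/name=ingress-nginx", "6m"); err != nil {
        log.Fatal(err)
    }
}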
I1123 08:11:50.807786 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:50.846655 18653 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-964416" context rescaled to 1 replicas
I1123 08:11:50.846665 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:51.379871 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:51.384977 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:51.745150 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.940509974s)
W1123 08:11:51.745200 18653 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
I1123 08:11:51.745219 18653 retry.go:31] will retry after 179.005398ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
stdout:
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created
stderr:
error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
ensure CRDs are installed first
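(The failure above is the usual CRD-ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the VolumeSnapshotClass CRD created by the same apply has not been established yet, so the resource mapping lookup fails and the apply is retried shortly afterwards; a later invocation in this log re-applies the same manifests. A hedged Go sketch of that retry-with-backoff pattern; the file list, attempt count, and delays are illustrative, not minikube's real values.)

// Sketch: re-run `kubectl apply` a few times with a growing delay so that
// objects of a just-created CRD can be applied once the CRD is established.
package main

import (
    "log"
    "os/exec"
    "time"
)

func applyWithRetry(files []string, attempts int) error {
    delay := 200 * time.Millisecond
    var lastErr error
    for i := 0; i < attempts; i++ {
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err == nil {
            return nil
        }
        lastErr = err
        log.Printf("apply failed (attempt %d): %v\n%s", i+1, err, out)
        time.Sleep(delay)
        delay *= 2 // simple exponential backoff between attempts
    }
    return lastErr
}

func main() {
    files := []string{
        "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
        "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
    }
    if err := applyWithRetry(files, 3); err != nil {
        log.Fatal(err)
    }
}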
I1123 08:11:51.745347 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.072075146s)
I1123 08:11:51.745384 18653 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.81918681s)
I1123 08:11:51.745420 18653 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.41755789s)
I1123 08:11:51.745442 18653 api_server.go:72] duration metric: took 10.562664233s to wait for apiserver process to appear ...
I1123 08:11:51.745453 18653 api_server.go:88] waiting for apiserver healthz status ...
I1123 08:11:51.745558 18653 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8443/healthz ...
I1123 08:11:51.745386 18653 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-964416"
I1123 08:11:51.747292 18653 out.go:179] * Verifying csi-hostpath-driver addon...
I1123 08:11:51.747320 18653 out.go:179] - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
I1123 08:11:51.748624 18653 out.go:179] - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
I1123 08:11:51.749054 18653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 08:11:51.749747 18653 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
I1123 08:11:51.749766 18653 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
I1123 08:11:51.794042 18653 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
I1123 08:11:51.794074 18653 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
I1123 08:11:51.807186 18653 api_server.go:279] https://192.168.39.198:8443/healthz returned 200:
ok
I1123 08:11:51.831599 18653 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1123 08:11:51.831621 18653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
I1123 08:11:51.844490 18653 api_server.go:141] control plane version: v1.34.1
I1123 08:11:51.844523 18653 api_server.go:131] duration metric: took 98.989325ms to wait for apiserver health ...
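(The healthz wait above is essentially an HTTPS GET against the apiserver until it answers 200 "ok". A small Go sketch of such a probe; it skips TLS verification and sends no client credentials, which is a simplification of the real check and may be rejected by an apiserver that disallows anonymous access.)

// Sketch: probe https://<apiserver>:8443/healthz and print the status and body.
package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "log"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    resp, err := client.Get("https://192.168.39.198:8443/healthz")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}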
I1123 08:11:51.844532 18653 system_pods.go:43] waiting for kube-system pods to appear ...
I1123 08:11:51.858034 18653 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 08:11:51.858052 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:51.858277 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:51.858291 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:51.871224 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
I1123 08:11:51.924365 18653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
I1123 08:11:51.928546 18653 system_pods.go:59] 20 kube-system pods found
I1123 08:11:51.928573 18653 system_pods.go:61] "amd-gpu-device-plugin-8vc9q" [8295884f-da88-49f2-9084-a9c8cfc1e4d9] Running
I1123 08:11:51.928582 18653 system_pods.go:61] "coredns-66bc5c9577-69dqf" [34c766d9-fd50-4a3f-808a-a98aa625e61c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1123 08:11:51.928589 18653 system_pods.go:61] "coredns-66bc5c9577-gxw2m" [4c7ecbdf-e8c7-4ff9-9c2d-dc54c953605f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1123 08:11:51.928595 18653 system_pods.go:61] "csi-hostpath-attacher-0" [c5f1ff48-f68e-422e-83e7-eacc4a9dd794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1123 08:11:51.928600 18653 system_pods.go:61] "csi-hostpath-resizer-0" [7ad454a6-cccb-4992-90db-67818e21d079] Pending
I1123 08:11:51.928607 18653 system_pods.go:61] "csi-hostpathplugin-vns9g" [28997eb1-283e-4d60-943c-9e31386ebc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1123 08:11:51.928611 18653 system_pods.go:61] "etcd-addons-964416" [fdb49f48-84aa-4799-b97e-21ce92b79ddc] Running
I1123 08:11:51.928614 18653 system_pods.go:61] "kube-apiserver-addons-964416" [75d01842-c68b-4a49-847c-58fbcf148fba] Running
I1123 08:11:51.928618 18653 system_pods.go:61] "kube-controller-manager-addons-964416" [36599b1c-1da7-4b89-b7e7-baac03480cd7] Running
I1123 08:11:51.928623 18653 system_pods.go:61] "kube-ingress-dns-minikube" [bc33e34a-7ac6-484c-b0a7-430085041ff4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1123 08:11:51.928626 18653 system_pods.go:61] "kube-proxy-cp69g" [3b6331ff-3dfb-46c8-b853-3ac13fdd22cc] Running
I1123 08:11:51.928629 18653 system_pods.go:61] "kube-scheduler-addons-964416" [d5865b9d-d76a-46fe-ad59-9db3f56a22ac] Running
I1123 08:11:51.928636 18653 system_pods.go:61] "metrics-server-85b7d694d7-bbw4l" [ca8af767-0eca-442a-abca-2fdfda492b61] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1123 08:11:51.928645 18653 system_pods.go:61] "nvidia-device-plugin-daemonset-n75x9" [8710964c-97c8-402e-9549-f6b1f4591c57] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1123 08:11:51.928651 18653 system_pods.go:61] "registry-6b586f9694-tgrtb" [462f4f44-75d7-422b-bb9c-ceb8be37562e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1123 08:11:51.928655 18653 system_pods.go:61] "registry-creds-764b6fb674-nrpjq" [b186300a-b391-46c2-8eee-26bb8cada6ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1123 08:11:51.928662 18653 system_pods.go:61] "registry-proxy-sn2cr" [aeb28b9e-fe74-4f9c-99cb-c02c966c626d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1123 08:11:51.928667 18653 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d275s" [dc674dd5-6a4d-49d2-8119-79fa3fcc63ef] Pending
I1123 08:11:51.928671 18653 system_pods.go:61] "snapshot-controller-7d9fbc56b8-xsdh6" [6c82d59c-4d16-497b-8fa3-7184384d1ee5] Pending
I1123 08:11:51.928675 18653 system_pods.go:61] "storage-provisioner" [fafc19a5-6c67-4faa-af77-b5dc63837928] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1123 08:11:51.928681 18653 system_pods.go:74] duration metric: took 84.143729ms to wait for pod list to return data ...
I1123 08:11:51.928693 18653 default_sa.go:34] waiting for default service account to be created ...
I1123 08:11:51.975940 18653 default_sa.go:45] found service account: "default"
I1123 08:11:51.975966 18653 default_sa.go:55] duration metric: took 47.266268ms for default service account to be created ...
I1123 08:11:51.975979 18653 system_pods.go:116] waiting for k8s-apps to be running ...
I1123 08:11:52.012876 18653 system_pods.go:86] 20 kube-system pods found
I1123 08:11:52.012914 18653 system_pods.go:89] "amd-gpu-device-plugin-8vc9q" [8295884f-da88-49f2-9084-a9c8cfc1e4d9] Running
I1123 08:11:52.012929 18653 system_pods.go:89] "coredns-66bc5c9577-69dqf" [34c766d9-fd50-4a3f-808a-a98aa625e61c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1123 08:11:52.012940 18653 system_pods.go:89] "coredns-66bc5c9577-gxw2m" [4c7ecbdf-e8c7-4ff9-9c2d-dc54c953605f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1123 08:11:52.012949 18653 system_pods.go:89] "csi-hostpath-attacher-0" [c5f1ff48-f68e-422e-83e7-eacc4a9dd794] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
I1123 08:11:52.012959 18653 system_pods.go:89] "csi-hostpath-resizer-0" [7ad454a6-cccb-4992-90db-67818e21d079] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
I1123 08:11:52.012975 18653 system_pods.go:89] "csi-hostpathplugin-vns9g" [28997eb1-283e-4d60-943c-9e31386ebc08] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
I1123 08:11:52.012986 18653 system_pods.go:89] "etcd-addons-964416" [fdb49f48-84aa-4799-b97e-21ce92b79ddc] Running
I1123 08:11:52.012993 18653 system_pods.go:89] "kube-apiserver-addons-964416" [75d01842-c68b-4a49-847c-58fbcf148fba] Running
I1123 08:11:52.012998 18653 system_pods.go:89] "kube-controller-manager-addons-964416" [36599b1c-1da7-4b89-b7e7-baac03480cd7] Running
I1123 08:11:52.013009 18653 system_pods.go:89] "kube-ingress-dns-minikube" [bc33e34a-7ac6-484c-b0a7-430085041ff4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
I1123 08:11:52.013016 18653 system_pods.go:89] "kube-proxy-cp69g" [3b6331ff-3dfb-46c8-b853-3ac13fdd22cc] Running
I1123 08:11:52.013024 18653 system_pods.go:89] "kube-scheduler-addons-964416" [d5865b9d-d76a-46fe-ad59-9db3f56a22ac] Running
I1123 08:11:52.013033 18653 system_pods.go:89] "metrics-server-85b7d694d7-bbw4l" [ca8af767-0eca-442a-abca-2fdfda492b61] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
I1123 08:11:52.013048 18653 system_pods.go:89] "nvidia-device-plugin-daemonset-n75x9" [8710964c-97c8-402e-9549-f6b1f4591c57] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
I1123 08:11:52.013058 18653 system_pods.go:89] "registry-6b586f9694-tgrtb" [462f4f44-75d7-422b-bb9c-ceb8be37562e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
I1123 08:11:52.013067 18653 system_pods.go:89] "registry-creds-764b6fb674-nrpjq" [b186300a-b391-46c2-8eee-26bb8cada6ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
I1123 08:11:52.013080 18653 system_pods.go:89] "registry-proxy-sn2cr" [aeb28b9e-fe74-4f9c-99cb-c02c966c626d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
I1123 08:11:52.013089 18653 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d275s" [dc674dd5-6a4d-49d2-8119-79fa3fcc63ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
I1123 08:11:52.013098 18653 system_pods.go:89] "snapshot-controller-7d9fbc56b8-xsdh6" [6c82d59c-4d16-497b-8fa3-7184384d1ee5] Pending
I1123 08:11:52.013108 18653 system_pods.go:89] "storage-provisioner" [fafc19a5-6c67-4faa-af77-b5dc63837928] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1123 08:11:52.013119 18653 system_pods.go:126] duration metric: took 37.132161ms to wait for k8s-apps to be running ...
I1123 08:11:52.013135 18653 system_svc.go:44] waiting for kubelet service to be running ....
I1123 08:11:52.013190 18653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1123 08:11:52.263320 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:52.286656 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:52.287127 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:52.761198 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:52.779283 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:52.787667 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:53.286648 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:53.304290 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:53.304505 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:53.559194 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.687935904s)
I1123 08:11:53.560118 18653 addons.go:495] Verifying addon gcp-auth=true in "addons-964416"
I1123 08:11:53.561532 18653 out.go:179] * Verifying gcp-auth addon...
I1123 08:11:53.563773 18653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
I1123 08:11:53.655360 18653 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
I1123 08:11:53.655395 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:53.760533 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:53.806151 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:53.809686 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:54.072280 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:54.256052 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:54.281078 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:54.283869 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:54.329971 18653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.405562218s)
I1123 08:11:54.330011 18653 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.316794277s)
I1123 08:11:54.330041 18653 system_svc.go:56] duration metric: took 2.316904612s WaitForService to wait for kubelet
I1123 08:11:54.330058 18653 kubeadm.go:587] duration metric: took 13.147278847s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1123 08:11:54.330084 18653 node_conditions.go:102] verifying NodePressure condition ...
I1123 08:11:54.336149 18653 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1123 08:11:54.336178 18653 node_conditions.go:123] node cpu capacity is 2
I1123 08:11:54.336195 18653 node_conditions.go:105] duration metric: took 6.103954ms to run NodePressure ...
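(The NodePressure check above reads the node's reported capacity: 2 CPUs and 17734596Ki of ephemeral storage on this node. One way to pull the same figures from the node status, assuming the addons-964416 context is active; the node name is taken from the log and the output format is whatever kubectl's jsonpath printer emits.)

// Sketch: print the capacity map straight from the node's status.
package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    out, err := exec.Command("kubectl", "get", "node", "addons-964416",
        "-o", "jsonpath={.status.capacity}").Output()
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("node capacity: %s\n", out)
}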
I1123 08:11:54.336211 18653 start.go:242] waiting for startup goroutines ...
I1123 08:11:54.572797 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:54.753927 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:54.781211 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:54.783690 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:55.069363 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:55.253431 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:55.278385 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:55.278513 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:55.579948 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:55.756953 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:55.779362 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:55.781401 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:56.070619 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:56.254558 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:56.280320 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:56.280661 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:56.567107 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:56.753024 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:56.779759 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:56.779974 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:57.070759 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:57.254455 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:57.281990 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:57.283434 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:57.568587 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:57.910111 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:57.910562 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:57.910694 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:58.069379 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:58.256492 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:58.358458 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:58.358573 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:58.567560 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:58.753765 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:58.777725 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:58.778898 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:59.067376 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:59.252786 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:59.277254 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:11:59.278642 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:59.567603 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:11:59.755559 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:11:59.780249 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:11:59.780895 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:00.068075 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:00.254225 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:00.281022 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:00.281189 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:00.570229 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:00.755070 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:00.781948 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:00.781987 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:01.069266 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:01.253146 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:01.284937 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:01.285198 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:01.568504 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:01.753174 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:01.785159 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:01.787095 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:02.067558 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:02.256646 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:02.280522 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:02.280910 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:02.570581 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:02.755539 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:02.780832 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:02.781158 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:03.068220 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:03.253105 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:03.285159 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:03.286043 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:03.567175 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:03.752846 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:03.783867 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:03.807095 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:04.068423 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:04.257031 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:04.278779 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:04.282840 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:04.567975 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:04.754897 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:04.782147 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:04.782693 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:05.069233 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:05.260665 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:05.279905 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:05.289433 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:05.570285 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:05.756646 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:05.781720 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:05.781992 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:06.084155 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:06.264449 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:06.293177 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:06.303247 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:06.570873 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:06.754892 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:06.780508 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:06.780683 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:07.424197 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:07.424844 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:07.425052 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:07.425167 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:07.574750 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:07.754441 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:07.778739 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:07.780542 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:08.067997 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:08.253123 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:08.279911 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:08.292102 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:08.571763 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:08.827958 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:08.828045 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:08.828281 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:09.068152 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:09.254586 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:09.281784 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:09.283025 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:09.569850 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:09.755672 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:09.781176 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:09.783940 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:10.069431 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:10.254510 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:10.278272 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:10.280765 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:10.569448 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:10.752783 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:10.781020 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:10.783039 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:11.069697 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:11.257120 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:11.282908 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:11.283554 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:11.567554 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:11.755889 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:11.781984 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:11.782143 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:12.068831 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:12.255349 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:12.280781 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:12.282573 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:12.570175 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:12.754125 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:12.780882 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:12.782813 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:13.357240 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:13.357376 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:13.359202 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:13.360870 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:13.567476 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:13.752960 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:13.781866 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:13.781925 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:14.070159 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:14.253422 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:14.278828 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:14.282480 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:14.570669 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:14.756654 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:14.781712 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:14.782724 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:15.071770 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:15.255955 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:15.280715 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:15.280768 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:15.575613 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:15.753256 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:15.780348 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:15.782534 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:16.195673 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:16.254363 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:16.283233 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:16.286979 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:16.568822 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:16.755006 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:16.782293 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:16.782344 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:17.070193 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:17.269826 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:17.279534 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:17.280056 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:17.660266 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:17.757884 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:17.778228 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:17.779089 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:18.072628 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:18.260409 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:18.360004 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:18.360128 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:18.568163 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:18.753421 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:18.778449 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:18.778841 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:19.067800 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:19.253358 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:19.279456 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:19.280667 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:19.566737 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:19.755176 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:19.778006 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:19.779789 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:20.067831 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:20.252974 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:20.278361 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:20.278763 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:20.569428 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:20.755419 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:20.779710 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:20.781806 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:21.069790 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:21.253862 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:21.277985 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:21.280045 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:21.567035 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:21.754960 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:21.777925 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:21.785371 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:22.069128 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:22.254717 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:22.278782 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:22.280675 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:22.570213 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:22.754963 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:22.778695 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:22.778795 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:23.069658 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:23.254508 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:23.283579 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:23.283799 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:23.569152 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:23.756436 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:23.781305 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:23.781435 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:24.070980 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:24.254951 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:24.279736 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:24.281698 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:24.568832 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:24.754164 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:24.779566 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:24.780258 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:25.068423 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:25.254588 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:25.284296 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:25.290666 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:25.700439 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:25.805679 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:25.806219 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:25.807912 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:26.068262 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:26.253383 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:26.277977 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:26.279878 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:26.567594 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:26.753796 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:26.781092 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:26.781092 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:27.070108 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:27.253975 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:27.278073 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:27.279243 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:27.570237 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:27.753821 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:27.783365 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:27.787026 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:28.067578 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:28.255572 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:28.279824 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:28.280149 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:28.567677 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:28.754310 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:28.779259 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:28.780958 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:29.066863 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:29.254048 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:29.285120 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:29.285833 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:29.568364 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:29.755484 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:29.779376 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:29.779538 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
I1123 08:12:30.071015 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:30.253664 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:30.277766 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:30.279795 18653 kapi.go:107] duration metric: took 40.005270475s to wait for kubernetes.io/minikube-addons=registry ...
I1123 08:12:30.576113 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:30.756913 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:30.854768 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:31.067295 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:31.255589 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:31.278369 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:31.568752 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:31.758009 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:31.778264 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:32.109874 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:32.256360 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:32.280770 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:32.568756 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:32.758508 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:32.785131 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:33.067779 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:33.266493 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:33.282380 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:33.572134 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:33.754594 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:33.780988 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:34.067361 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:34.258381 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:34.278973 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:34.572513 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:34.752678 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:34.779107 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:35.070501 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:35.253227 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:35.279513 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:35.567749 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:35.757209 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:35.779942 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:36.067082 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:36.254246 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:36.288291 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:36.570290 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:36.753238 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:36.778775 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:37.066612 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:37.259263 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:37.280935 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:37.571914 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:37.759960 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:37.787555 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:38.068349 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:38.254225 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:38.281249 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:38.570949 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:38.753755 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:38.778834 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:39.076515 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:39.268013 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:39.285310 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:39.571174 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:39.753712 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:39.778850 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:40.070905 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:40.255364 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:40.355885 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:40.571119 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:40.756323 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:40.781918 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:41.070335 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:41.258367 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:41.281058 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:41.568093 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:41.753649 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:41.779312 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:42.068821 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:42.259397 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:42.280168 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:42.569078 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:42.752670 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:42.778879 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:43.067973 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:43.253615 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:43.279829 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:43.568910 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:43.753770 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:43.781796 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:44.070076 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:44.254942 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:44.284232 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:44.569905 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:44.754871 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:44.777797 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:45.067161 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:45.253116 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
I1123 08:12:45.278662 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:45.567912 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:45.753832 18653 kapi.go:107] duration metric: took 54.00477427s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
I1123 08:12:45.777813 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:46.068057 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:46.278906 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:46.566894 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:46.779172 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:47.067483 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:47.277741 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:47.569623 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:47.777867 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:48.067928 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:48.278481 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:48.567924 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:48.778781 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:49.067114 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:49.278638 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:49.568038 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:49.778503 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:50.068100 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:50.279750 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:50.567370 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:50.778867 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:51.069541 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:51.278576 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:51.568394 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:51.778581 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:52.068285 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:52.279251 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:52.568904 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:52.778873 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:53.067799 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:53.278362 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:53.567588 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:53.778347 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:54.068107 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:54.279910 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:54.567049 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:54.781487 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:55.070098 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:55.282623 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:55.570851 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:55.781131 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:56.067767 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:56.280511 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:56.568891 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:56.778092 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:57.068642 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:57.277565 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:57.570879 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:57.778630 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:58.069634 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:58.280608 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:58.570710 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:58.778657 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:59.075718 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:59.278372 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:12:59.570415 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:12:59.782091 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:13:00.068015 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:13:00.278425 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:13:00.570968 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:13:00.783037 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:13:01.202882 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:13:01.279683 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:13:01.568656 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:13:01.778862 18653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
I1123 08:13:02.067377 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:13:02.278970 18653 kapi.go:107] duration metric: took 1m12.004393125s to wait for app.kubernetes.io/name=ingress-nginx ...
I1123 08:13:02.569671 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:13:03.071965 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:13:03.568091 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:13:04.069148 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:13:04.571399 18653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
I1123 08:13:05.067408 18653 kapi.go:107] duration metric: took 1m11.50363415s to wait for kubernetes.io/minikube-addons=gcp-auth ...
I1123 08:13:05.068961 18653 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-964416 cluster.
I1123 08:13:05.070210 18653 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
I1123 08:13:05.071483 18653 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
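As a minimal sketch of the advice logged just above (the pod name "skip-demo" is a placeholder, not something from this run), the gcp-auth mount can be avoided by creating the pod with the gcp-auth-skip-secret label already present; the addon checks the label key, and "true" is the conventional value:

    kubectl --context addons-964416 run skip-demo \
      --image=gcr.io/k8s-minikube/busybox \
      --labels="gcp-auth-skip-secret=true" \
      -- sleep 3600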
I1123 08:13:05.072830 18653 out.go:179] * Enabled addons: cloud-spanner, amd-gpu-device-plugin, inspektor-gadget, storage-provisioner-rancher, registry-creds, ingress-dns, storage-provisioner, default-storageclass, nvidia-device-plugin, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
I1123 08:13:05.073991 18653 addons.go:530] duration metric: took 1m23.891178936s for enable addons: enabled=[cloud-spanner amd-gpu-device-plugin inspektor-gadget storage-provisioner-rancher registry-creds ingress-dns storage-provisioner default-storageclass nvidia-device-plugin metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
I1123 08:13:05.074043 18653 start.go:247] waiting for cluster config update ...
I1123 08:13:05.074062 18653 start.go:256] writing updated cluster config ...
I1123 08:13:05.074326 18653 ssh_runner.go:195] Run: rm -f paused
I1123 08:13:05.081515 18653 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1123 08:13:05.085540 18653 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gxw2m" in "kube-system" namespace to be "Ready" or be gone ...
I1123 08:13:05.090989 18653 pod_ready.go:94] pod "coredns-66bc5c9577-gxw2m" is "Ready"
I1123 08:13:05.091008 18653 pod_ready.go:86] duration metric: took 5.450504ms for pod "coredns-66bc5c9577-gxw2m" in "kube-system" namespace to be "Ready" or be gone ...
I1123 08:13:05.093678 18653 pod_ready.go:83] waiting for pod "etcd-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
I1123 08:13:05.098199 18653 pod_ready.go:94] pod "etcd-addons-964416" is "Ready"
I1123 08:13:05.098219 18653 pod_ready.go:86] duration metric: took 4.519474ms for pod "etcd-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
I1123 08:13:05.100500 18653 pod_ready.go:83] waiting for pod "kube-apiserver-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
I1123 08:13:05.106213 18653 pod_ready.go:94] pod "kube-apiserver-addons-964416" is "Ready"
I1123 08:13:05.106236 18653 pod_ready.go:86] duration metric: took 5.713706ms for pod "kube-apiserver-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
I1123 08:13:05.108546 18653 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
I1123 08:13:05.485714 18653 pod_ready.go:94] pod "kube-controller-manager-addons-964416" is "Ready"
I1123 08:13:05.485750 18653 pod_ready.go:86] duration metric: took 377.186648ms for pod "kube-controller-manager-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
I1123 08:13:05.690238 18653 pod_ready.go:83] waiting for pod "kube-proxy-cp69g" in "kube-system" namespace to be "Ready" or be gone ...
I1123 08:13:06.085885 18653 pod_ready.go:94] pod "kube-proxy-cp69g" is "Ready"
I1123 08:13:06.085907 18653 pod_ready.go:86] duration metric: took 395.638395ms for pod "kube-proxy-cp69g" in "kube-system" namespace to be "Ready" or be gone ...
I1123 08:13:06.288019 18653 pod_ready.go:83] waiting for pod "kube-scheduler-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
I1123 08:13:06.685739 18653 pod_ready.go:94] pod "kube-scheduler-addons-964416" is "Ready"
I1123 08:13:06.685774 18653 pod_ready.go:86] duration metric: took 397.732698ms for pod "kube-scheduler-addons-964416" in "kube-system" namespace to be "Ready" or be gone ...
I1123 08:13:06.685791 18653 pod_ready.go:40] duration metric: took 1.604246897s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1123 08:13:06.730388 18653 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
I1123 08:13:06.731934 18653 out.go:179] * Done! kubectl is now configured to use "addons-964416" cluster and "default" namespace by default
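For reference, the extra readiness wait logged above corresponds roughly to waiting for the core kube-system pods to report Ready; a hedged kubectl equivalent (illustrative only, not what the test harness itself runs) would be:

    kubectl config current-context          # expected: addons-964416
    kubectl --context addons-964416 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s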
==> CRI-O <==
Nov 23 08:16:11 addons-964416 conmon[12551]: conmon d5fc58698cf6b99dd082 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
Nov 23 08:16:11 addons-964416 conmon[12551]: conmon d5fc58698cf6b99dd082 <ndebug>: terminal_ctrl_fd: 12
Nov 23 08:16:11 addons-964416 conmon[12551]: conmon d5fc58698cf6b99dd082 <ndebug>: winsz read side: 16, winsz write side: 17
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.152858037Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6772d6f2-b4d4-4de3-a225-5cfcf2927bc7 name=/runtime.v1.RuntimeService/Version
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.152923754Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6772d6f2-b4d4-4de3-a225-5cfcf2927bc7 name=/runtime.v1.RuntimeService/Version
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.154153950Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39fe12f3-52fb-409c-995c-7c13d8e52369 name=/runtime.v1.ImageService/ImageFsInfo
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.161487401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763885771161460598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39fe12f3-52fb-409c-995c-7c13d8e52369 name=/runtime.v1.ImageService/ImageFsInfo
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.162590443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=27e0c1e9-bad3-4c18-bb4b-36370da2e1c9 name=/runtime.v1.RuntimeService/ListContainers
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.162894587Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=27e0c1e9-bad3-4c18-bb4b-36370da2e1c9 name=/runtime.v1.RuntimeService/ListContainers
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.163591195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:75e1b2efb20e18e880e59f64bc49b3114d53ccde7e613edc5b4615dc84fcd0a9,PodSandboxId:d6bf7a0c9178de5aeded44ed3172fe8a5fa37b1637e181238bae040a3132ac32,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763885628957386781,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16094845-c835-4494-a064-31053be1943b,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13ad04ce6225dcf840a6cf8802f4ba866d65bb5e34d4bf31ab7e5f17e7b741,PodSandboxId:88d41c059de83448dda19955ce8fb31c3489bea24a5f320b367b9d98641ffebf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763885590231067098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5d5468e-0e81-49d7-8cef-aec9926db30e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fcfcb8eb46bc8c8ad289f19e3217cfb5ad9dfda4f22775c0a49639411e4285b,PodSandboxId:de8125f44caed4f5ad920cab9a0bb988de7dc2f16c74de14c821e650578a3134,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763885581360265657,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-d2lnn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e3e96657-f191-4984-ad8c-72b0ab056c55,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e41b23988404444894d83351fae6c2b44e55b7c753a80b8e5fd0a5e0fca26d59,PodSandboxId:adcafe8d23dcc115ab2ac8cc000e1264c58b4386ebd44aae9b79294b3ce1c6ea,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Sta
te:CONTAINER_EXITED,CreatedAt:1763885551420361747,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qjtrl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c548cf97-ddd2-4a1a-919e-311e39bd3833,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d0112d0fad9f107107787fcc4761e8dc0a95d6ec3ab85df4820ac9dcba53be,PodSandboxId:e6ecc919cd54c5979641c50f531570e7f0db93d499967b4365cf666601922407,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970fa
a6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763885550605397523,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n8xfv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50dea3aa-fc75-4df0-bb04-bd8fd77e7ff9,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de81c25adb70a06d2e34c5fd93db6dbd056e629e8c1a19d19296966543bb3794,PodSandboxId:3fed9ced7aa5025979185f08ffa8128f912cdbf3370def2dca177c87e468cb93,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763885536633541503,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc33e34a-7ac6-484c-b0a7-430085041ff4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d5ef951538ac924b333da325abf735f2e52434e3c5dab819290dc703c0fa9f,PodSandboxId:17af1c9dedcd0272d4ffcb547936e10b9b74d0c546f2751eb7944aeacf774f79,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f70
9a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763885511479730934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc19a5-6c67-4faa-af77-b5dc63837928,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034b673ca1afef8954547ba3b46fd029c5a7e32e9cae3456c825536ee88059e6,PodSandboxId:b4dc96fcea260adda9b8ee394b9b2bb5c3afdf293214bf8627dd585930863e57,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a0
7c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763885509150592446,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-8vc9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8295884f-da88-49f2-9084-a9c8cfc1e4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c085c7e3c7e1d4180e3f556a2b13e400e1a3a39cd49b5d8a82e0e6cbb197ee2,PodSandboxId:4d8a9af25383570f4daeb138a79efa23e5ee969bce155aa8a528afeed7cce39e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763885502664071177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gxw2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c7ecbdf-e8c7-4ff9-9c2d-dc54c953605f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.te
rminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a32a377fb8fd085baa47ac0065a2a6c9b61233646d15f815186bfb912aaee0,PodSandboxId:e6983ca5f266bea92319da768810baecbeb05b50b53084f38979f587c025a089,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763885501513009089,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cp69g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b6331ff-3dfb-46c8-b853-3ac13fdd22cc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc7808cbafa35ac18cf85d26bfed95c36a01bfae4fee82ff44e13e37accb2fb,PodSandboxId:9b65b771852981bff123c7c64aea210a8b531e3f1a3e167c3fcdf73979a4e982,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763885489786436394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa4d6f814c0c0a234c1829d41f9cc06b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f0364c26ba8a2ffe836cbcc6d72ce91fb1532b3629b02515db50a6d4b466dc0,PodSandboxId:fb568a606e43974dcf74554272588ec98d2a159da91a96197ac316a5aba04b2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763885489807444229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc48a5f24208a1a403f153c19e9b10a,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\
":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39699de5a00c064cdca41c90eb8b78538e5879de76016c72552fe5d7db95d87e,PodSandboxId:5c6c286389717dc5b739c64240d1166c11fa677abe126f45c071987cabb0aafa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763885489772029608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3709f2b029d1230ca25347545eb530b,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e910ff123e32ce12c666332e542d611040ccdc568a9fc18717d44e9a60184ce,PodSandboxId:c6f20fa3ad6a2efc964c8e924a906253b4e17d98d581838dbe3aeb539efec671,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763885489739229669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-964416,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: b5bb7c82c50c3697588cc803d0c3e419,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=27e0c1e9-bad3-4c18-bb4b-36370da2e1c9 name=/runtime.v1.RuntimeService/ListContainers
Nov 23 08:16:11 addons-964416 conmon[12551]: conmon d5fc58698cf6b99dd082 <ndebug>: container PID: 12564
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.185924166Z" level=debug msg="Received container pid: 12564" file="oci/runtime_oci.go:284" id=51f92f68-a295-441c-b4d9-a4d686c1d82f name=/runtime.v1.RuntimeService/CreateContainer
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.201590134Z" level=info msg="Created container d5fc58698cf6b99dd0822df09b23eec5303d8cc276f2686cec7e7e9451c8b9a7: default/hello-world-app-5d498dc89-4czrb/hello-world-app" file="server/container_create.go:491" id=51f92f68-a295-441c-b4d9-a4d686c1d82f name=/runtime.v1.RuntimeService/CreateContainer
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.201786552Z" level=debug msg="Response: &CreateContainerResponse{ContainerId:d5fc58698cf6b99dd0822df09b23eec5303d8cc276f2686cec7e7e9451c8b9a7,}" file="otel-collector/interceptors.go:74" id=51f92f68-a295-441c-b4d9-a4d686c1d82f name=/runtime.v1.RuntimeService/CreateContainer
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.203009299Z" level=debug msg="Request: &StartContainerRequest{ContainerId:d5fc58698cf6b99dd0822df09b23eec5303d8cc276f2686cec7e7e9451c8b9a7,}" file="otel-collector/interceptors.go:62" id=7384383d-3bab-464f-9a28-3c02251e8480 name=/runtime.v1.RuntimeService/StartContainer
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.203139344Z" level=info msg="Starting container: d5fc58698cf6b99dd0822df09b23eec5303d8cc276f2686cec7e7e9451c8b9a7" file="server/container_start.go:21" id=7384383d-3bab-464f-9a28-3c02251e8480 name=/runtime.v1.RuntimeService/StartContainer
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.211783459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b402f1e5-77d2-4003-a9d4-58749b3e19cf name=/runtime.v1.RuntimeService/Version
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.211910323Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b402f1e5-77d2-4003-a9d4-58749b3e19cf name=/runtime.v1.RuntimeService/Version
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.214601323Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2af74257-4803-4512-90e4-d0c25522e3f9 name=/runtime.v1.ImageService/ImageFsInfo
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.216184987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1763885771216158074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:597172,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2af74257-4803-4512-90e4-d0c25522e3f9 name=/runtime.v1.ImageService/ImageFsInfo
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.218915582Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f1f90bb-c943-4cbe-ada3-290be07e07b3 name=/runtime.v1.RuntimeService/ListContainers
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.218975096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f1f90bb-c943-4cbe-ada3-290be07e07b3 name=/runtime.v1.RuntimeService/ListContainers
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.219292381Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5fc58698cf6b99dd0822df09b23eec5303d8cc276f2686cec7e7e9451c8b9a7,PodSandboxId:f281d2831dd0b2a9dd27cbe28e438d3893facfdde33318d75e0e3112d2d7d992,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_CREATED,CreatedAt:1763885771130471452,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d498dc89-4czrb,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 542a36d2-e7f4-4a68-8a14-d26c69029ccd,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.p
orts: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e1b2efb20e18e880e59f64bc49b3114d53ccde7e613edc5b4615dc84fcd0a9,PodSandboxId:d6bf7a0c9178de5aeded44ed3172fe8a5fa37b1637e181238bae040a3132ac32,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1763885628957386781,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16094845-c835-4494-a064-31053be1943b,},Annotations:map[string]string{io.kubernete
s.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a13ad04ce6225dcf840a6cf8802f4ba866d65bb5e34d4bf31ab7e5f17e7b741,PodSandboxId:88d41c059de83448dda19955ce8fb31c3489bea24a5f320b367b9d98641ffebf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1763885590231067098,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f5d5468e-0e81-49d7-8c
ef-aec9926db30e,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1fcfcb8eb46bc8c8ad289f19e3217cfb5ad9dfda4f22775c0a49639411e4285b,PodSandboxId:de8125f44caed4f5ad920cab9a0bb988de7dc2f16c74de14c821e650578a3134,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1763885581360265657,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-d2lnn,io.kubernetes.pod.namespace: ingress-nginx,io.
kubernetes.pod.uid: e3e96657-f191-4984-ad8c-72b0ab056c55,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:e41b23988404444894d83351fae6c2b44e55b7c753a80b8e5fd0a5e0fca26d59,PodSandboxId:adcafe8d23dcc115ab2ac8cc000e1264c58b4386ebd44aae9b79294b3ce1c6ea,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,Annotations:map[string]string{},UserSpecifi
edImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763885551420361747,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qjtrl,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c548cf97-ddd2-4a1a-919e-311e39bd3833,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53d0112d0fad9f107107787fcc4761e8dc0a95d6ec3ab85df4820ac9dcba53be,PodSandboxId:e6ecc919cd54c5979641c50f531570e7f0db93d499967b4365cf666601922407,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotat
ions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1763885550605397523,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n8xfv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 50dea3aa-fc75-4df0-bb04-bd8fd77e7ff9,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de81c25adb70a06d2e34c5fd93db6dbd056e629e8c1a19d19296966543bb3794,PodSandboxId:3fed9ced7aa5025979185f08ffa8128f912cdbf3370def2dca177c87e468cb93,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1763885536633541503,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc33e34a-7ac6-484c-b0a7-430085041ff4,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82d5ef951538ac924b333da325abf735f2e52434e3c5dab819290dc703c0fa9f,PodSandboxId:17af1c9dedcd0272d4ffcb547936e10b9b74d0c546f2751eb7944aeacf774f79,Metadata:&ContainerMetadata{Name
:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1763885511479730934,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fafc19a5-6c67-4faa-af77-b5dc63837928,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:034b673ca1afef8954547ba3b46fd029c5a7e32e9cae3456c825536ee88059e6,PodSandboxId:b4dc96fcea260adda9b8ee394b9b2bb5c3afdf293214bf8627dd585930863e57,Metadata:&ContainerMetadata{Name:amd-gpu-dev
ice-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1763885509150592446,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-8vc9q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8295884f-da88-49f2-9084-a9c8cfc1e4d9,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c085c7e3c7e1d4180e3f556a2b13e400e1a3a39cd49b5d8a82e0e6cbb197ee2,PodSandboxId:4d8a9af25383570f4daeb138a79efa23e5ee969bce155aa8a528afeed7cce39e,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1763885502664071177,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gxw2m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c7ecbdf-e8c7-4ff9-9c2d-dc54c953605f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a32a377fb8fd085baa47ac0065a2a6c9b61233646d15f815186bfb912aaee0,PodSandboxId:e6983ca5f266bea92319da768810baecbeb05b50b53084f38979f587c025a089,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1763885501513009089,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cp69g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b6331ff-3dfb-46c8-b853-3ac13fdd22cc,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bc7808cbafa35ac18cf85d26bfed95c36a01bfae4fee82ff44e13e37accb2fb,PodSandboxId:9b65b771852981bff123c7c64aea210a8b531e3f1a3e167c3fcdf73979a4e982,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1763885489786436394,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa4d6f814c0c0a234c1829d41f9cc06b,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":
\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f0364c26ba8a2ffe836cbcc6d72ce91fb1532b3629b02515db50a6d4b466dc0,PodSandboxId:fb568a606e43974dcf74554272588ec98d2a159da91a96197ac316a5aba04b2b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1763885489807444229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efc48a5f24208a1a403f153c19e9b10a,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kube
rnetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39699de5a00c064cdca41c90eb8b78538e5879de76016c72552fe5d7db95d87e,PodSandboxId:5c6c286389717dc5b739c64240d1166c11fa677abe126f45c071987cabb0aafa,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1763885489772029608,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-964416,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: b3709f2b029d1230ca25347545eb530b,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e910ff123e32ce12c666332e542d611040ccdc568a9fc18717d44e9a60184ce,PodSandboxId:c6f20fa3ad6a2efc964c8e924a906253b4e17d98d581838dbe3aeb539efec671,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1763885489739229669,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-964416,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5bb7c82c50c3697588cc803d0c3e419,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f1f90bb-c943-4cbe-ada3-290be07e07b3 name=/runtime.v1.RuntimeService/ListContainers
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.224707843Z" level=info msg="Started container" PID=12564 containerID=d5fc58698cf6b99dd0822df09b23eec5303d8cc276f2686cec7e7e9451c8b9a7 description=default/hello-world-app-5d498dc89-4czrb/hello-world-app file="server/container_start.go:115" id=7384383d-3bab-464f-9a28-3c02251e8480 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f281d2831dd0b2a9dd27cbe28e438d3893facfdde33318d75e0e3112d2d7d992
Nov 23 08:16:11 addons-964416 crio[811]: time="2025-11-23 08:16:11.240462934Z" level=debug msg="Response: &StartContainerResponse{}" file="otel-collector/interceptors.go:74" id=7384383d-3bab-464f-9a28-3c02251e8480 name=/runtime.v1.RuntimeService/StartContainer
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
d5fc58698cf6b docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 Less than a second ago Running hello-world-app 0 f281d2831dd0b hello-world-app-5d498dc89-4czrb default
75e1b2efb20e1 docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 2 minutes ago Running nginx 0 d6bf7a0c9178d nginx default
3a13ad04ce622 gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 3 minutes ago Running busybox 0 88d41c059de83 busybox default
1fcfcb8eb46bc registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27 3 minutes ago Running controller 0 de8125f44caed ingress-nginx-controller-6c8bf45fb-d2lnn ingress-nginx
e41b239884044 884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45 3 minutes ago Exited patch 1 adcafe8d23dcc ingress-nginx-admission-patch-qjtrl ingress-nginx
53d0112d0fad9 registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f 3 minutes ago Exited create 0 e6ecc919cd54c ingress-nginx-admission-create-n8xfv ingress-nginx
de81c25adb70a docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7 3 minutes ago Running minikube-ingress-dns 0 3fed9ced7aa50 kube-ingress-dns-minikube kube-system
82d5ef951538a 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 4 minutes ago Running storage-provisioner 0 17af1c9dedcd0 storage-provisioner kube-system
034b673ca1afe docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f 4 minutes ago Running amd-gpu-device-plugin 0 b4dc96fcea260 amd-gpu-device-plugin-8vc9q kube-system
6c085c7e3c7e1 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969 4 minutes ago Running coredns 0 4d8a9af253835 coredns-66bc5c9577-gxw2m kube-system
33a32a377fb8f fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7 4 minutes ago Running kube-proxy 0 e6983ca5f266b kube-proxy-cp69g kube-system
7f0364c26ba8a 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813 4 minutes ago Running kube-scheduler 0 fb568a606e439 kube-scheduler-addons-964416 kube-system
9bc7808cbafa3 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115 4 minutes ago Running etcd 0 9b65b77185298 etcd-addons-964416 kube-system
39699de5a00c0 c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f 4 minutes ago Running kube-controller-manager 0 5c6c286389717 kube-controller-manager-addons-964416 kube-system
9e910ff123e32 c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97 4 minutes ago Running kube-apiserver 0 c6f20fa3ad6a2 kube-apiserver-addons-964416 kube-system
==> coredns [6c085c7e3c7e1d4180e3f556a2b13e400e1a3a39cd49b5d8a82e0e6cbb197ee2] <==
[INFO] 10.244.0.8:34929 - 37132 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000132878s
[INFO] 10.244.0.8:34929 - 47308 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000194175s
[INFO] 10.244.0.8:34929 - 54448 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000310838s
[INFO] 10.244.0.8:34929 - 46437 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000082882s
[INFO] 10.244.0.8:34929 - 1722 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000150189s
[INFO] 10.244.0.8:34929 - 14624 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000115232s
[INFO] 10.244.0.8:34929 - 3595 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000197893s
[INFO] 10.244.0.8:42096 - 49119 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142563s
[INFO] 10.244.0.8:42096 - 48771 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000299107s
[INFO] 10.244.0.8:40663 - 3895 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000165114s
[INFO] 10.244.0.8:40663 - 3668 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00025171s
[INFO] 10.244.0.8:50148 - 57087 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092353s
[INFO] 10.244.0.8:50148 - 56608 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000145897s
[INFO] 10.244.0.8:40736 - 1872 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096571s
[INFO] 10.244.0.8:40736 - 1708 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000310129s
[INFO] 10.244.0.23:33329 - 55512 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000384926s
[INFO] 10.244.0.23:52635 - 27490 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00013239s
[INFO] 10.244.0.23:49599 - 45231 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124792s
[INFO] 10.244.0.23:43686 - 59657 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000084376s
[INFO] 10.244.0.23:41900 - 50560 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091879s
[INFO] 10.244.0.23:49637 - 36509 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129987s
[INFO] 10.244.0.23:58010 - 5775 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00180098s
[INFO] 10.244.0.23:42039 - 4844 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004089908s
[INFO] 10.244.0.26:52134 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000411446s
[INFO] 10.244.0.26:51430 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012813s
==> describe nodes <==
Name: addons-964416
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=addons-964416
kubernetes.io/os=linux
minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
minikube.k8s.io/name=addons-964416
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_11_23T08_11_35_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
topology.hostpath.csi/node=addons-964416
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 23 Nov 2025 08:11:32 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: addons-964416
AcquireTime: <unset>
RenewTime: Sun, 23 Nov 2025 08:16:09 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 23 Nov 2025 08:14:08 +0000 Sun, 23 Nov 2025 08:11:30 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 23 Nov 2025 08:14:08 +0000 Sun, 23 Nov 2025 08:11:30 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 23 Nov 2025 08:14:08 +0000 Sun, 23 Nov 2025 08:11:30 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 23 Nov 2025 08:14:08 +0000 Sun, 23 Nov 2025 08:11:36 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.198
Hostname: addons-964416
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001788Ki
pods: 110
System Info:
Machine ID: 198921e33bb94b459dea69ff479a7843
System UUID: 198921e3-3bb9-4b45-9dea-69ff479a7843
Boot ID: ce72afb7-a3f6-4f51-b999-aef96396bed2
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.29.1
Kubelet Version: v1.34.1
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (13 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m4s
default hello-world-app-5d498dc89-4czrb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2s
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m26s
ingress-nginx ingress-nginx-controller-6c8bf45fb-d2lnn 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4m22s
kube-system amd-gpu-device-plugin-8vc9q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m27s
kube-system coredns-66bc5c9577-gxw2m 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4m30s
kube-system etcd-addons-964416 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4m36s
kube-system kube-apiserver-addons-964416 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m37s
kube-system kube-controller-manager-addons-964416 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4m36s
kube-system kube-ingress-dns-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m24s
kube-system kube-proxy-cp69g 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m31s
kube-system kube-scheduler-addons-964416 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4m36s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m23s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource           Requests    Limits
--------           --------    ------
cpu                850m (42%)  0 (0%)
memory             260Mi (6%)  170Mi (4%)
ephemeral-storage  0 (0%)      0 (0%)
hugepages-2Mi      0 (0%)      0 (0%)
Events:
Type    Reason                   Age                    From             Message
----    ------                   ----                   ----             -------
Normal  Starting                 4m28s                  kube-proxy
Normal  Starting                 4m43s                  kubelet          Starting kubelet.
Normal  NodeHasSufficientMemory  4m43s (x8 over 4m43s)  kubelet          Node addons-964416 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    4m43s (x8 over 4m43s)  kubelet          Node addons-964416 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     4m43s (x7 over 4m43s)  kubelet          Node addons-964416 status is now: NodeHasSufficientPID
Normal  NodeAllocatableEnforced  4m43s                  kubelet          Updated Node Allocatable limit across pods
Normal  Starting                 4m36s                  kubelet          Starting kubelet.
Normal  NodeAllocatableEnforced  4m36s                  kubelet          Updated Node Allocatable limit across pods
Normal  NodeHasSufficientMemory  4m36s                  kubelet          Node addons-964416 status is now: NodeHasSufficientMemory
Normal  NodeHasNoDiskPressure    4m36s                  kubelet          Node addons-964416 status is now: NodeHasNoDiskPressure
Normal  NodeHasSufficientPID     4m36s                  kubelet          Node addons-964416 status is now: NodeHasSufficientPID
Normal  NodeReady                4m35s                  kubelet          Node addons-964416 status is now: NodeReady
Normal  RegisteredNode           4m32s                  node-controller  Node addons-964416 event: Registered Node addons-964416 in Controller
==> dmesg <==
[ +1.114548] kauditd_printk_skb: 321 callbacks suppressed
[ +1.396067] kauditd_printk_skb: 344 callbacks suppressed
[ +2.244772] kauditd_printk_skb: 347 callbacks suppressed
[Nov23 08:12] kauditd_printk_skb: 20 callbacks suppressed
[ +4.130214] kauditd_printk_skb: 23 callbacks suppressed
[ +7.696609] kauditd_printk_skb: 11 callbacks suppressed
[ +5.272444] kauditd_printk_skb: 41 callbacks suppressed
[ +5.189814] kauditd_printk_skb: 152 callbacks suppressed
[ +3.890564] kauditd_printk_skb: 91 callbacks suppressed
[ +3.450258] kauditd_printk_skb: 120 callbacks suppressed
[ +0.000089] kauditd_printk_skb: 20 callbacks suppressed
[ +0.000159] kauditd_printk_skb: 29 callbacks suppressed
[Nov23 08:13] kauditd_printk_skb: 53 callbacks suppressed
[ +2.499629] kauditd_printk_skb: 47 callbacks suppressed
[ +10.549604] kauditd_printk_skb: 17 callbacks suppressed
[ +5.926217] kauditd_printk_skb: 22 callbacks suppressed
[ +4.589167] kauditd_printk_skb: 39 callbacks suppressed
[ +0.000955] kauditd_printk_skb: 36 callbacks suppressed
[ +0.960938] kauditd_printk_skb: 147 callbacks suppressed
[ +2.480230] kauditd_printk_skb: 181 callbacks suppressed
[ +0.000254] kauditd_printk_skb: 102 callbacks suppressed
[Nov23 08:14] kauditd_printk_skb: 106 callbacks suppressed
[ +0.000067] kauditd_printk_skb: 22 callbacks suppressed
[ +7.861859] kauditd_printk_skb: 41 callbacks suppressed
[Nov23 08:16] kauditd_printk_skb: 147 callbacks suppressed
==> etcd [9bc7808cbafa35ac18cf85d26bfed95c36a01bfae4fee82ff44e13e37accb2fb] <==
{"level":"info","ts":"2025-11-23T08:12:58.263562Z","caller":"traceutil/trace.go:172","msg":"trace[346416027] linearizableReadLoop","detail":"{readStateIndex:1207; appliedIndex:1207; }","duration":"114.232107ms","start":"2025-11-23T08:12:58.149315Z","end":"2025-11-23T08:12:58.263547Z","steps":["trace[346416027] 'read index received' (duration: 114.225272ms)","trace[346416027] 'applied index is now lower than readState.Index' (duration: 5.663µs)"],"step_count":2}
{"level":"info","ts":"2025-11-23T08:12:58.263681Z","caller":"traceutil/trace.go:172","msg":"trace[946567403] transaction","detail":"{read_only:false; response_revision:1176; number_of_response:1; }","duration":"201.411685ms","start":"2025-11-23T08:12:58.062260Z","end":"2025-11-23T08:12:58.263672Z","steps":["trace[946567403] 'process raft request' (duration: 201.312454ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-23T08:12:58.263910Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.599451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourceclaimtemplates\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-23T08:12:58.263994Z","caller":"traceutil/trace.go:172","msg":"trace[854540074] range","detail":"{range_begin:/registry/resourceclaimtemplates; range_end:; response_count:0; response_revision:1176; }","duration":"114.693979ms","start":"2025-11-23T08:12:58.149293Z","end":"2025-11-23T08:12:58.263987Z","steps":["trace[854540074] 'agreement among raft nodes before linearized reading' (duration: 114.581576ms)"],"step_count":1}
{"level":"info","ts":"2025-11-23T08:13:01.194625Z","caller":"traceutil/trace.go:172","msg":"trace[78832867] linearizableReadLoop","detail":"{readStateIndex:1212; appliedIndex:1212; }","duration":"133.69601ms","start":"2025-11-23T08:13:01.060912Z","end":"2025-11-23T08:13:01.194608Z","steps":["trace[78832867] 'read index received' (duration: 133.690869ms)","trace[78832867] 'applied index is now lower than readState.Index' (duration: 4.23µs)"],"step_count":2}
{"level":"info","ts":"2025-11-23T08:13:01.195030Z","caller":"traceutil/trace.go:172","msg":"trace[1300969811] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"184.536657ms","start":"2025-11-23T08:13:01.010482Z","end":"2025-11-23T08:13:01.195018Z","steps":["trace[1300969811] 'process raft request' (duration: 184.176165ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-23T08:13:01.195245Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.353424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-23T08:13:01.195265Z","caller":"traceutil/trace.go:172","msg":"trace[1652862] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1180; }","duration":"134.382028ms","start":"2025-11-23T08:13:01.060877Z","end":"2025-11-23T08:13:01.195259Z","steps":["trace[1652862] 'agreement among raft nodes before linearized reading' (duration: 134.078381ms)"],"step_count":1}
{"level":"info","ts":"2025-11-23T08:13:04.472297Z","caller":"traceutil/trace.go:172","msg":"trace[1756090634] transaction","detail":"{read_only:false; response_revision:1196; number_of_response:1; }","duration":"168.545941ms","start":"2025-11-23T08:13:04.303738Z","end":"2025-11-23T08:13:04.472283Z","steps":["trace[1756090634] 'process raft request' (duration: 168.408372ms)"],"step_count":1}
{"level":"info","ts":"2025-11-23T08:13:31.556461Z","caller":"traceutil/trace.go:172","msg":"trace[53365910] linearizableReadLoop","detail":"{readStateIndex:1404; appliedIndex:1404; }","duration":"217.993552ms","start":"2025-11-23T08:13:31.338452Z","end":"2025-11-23T08:13:31.556446Z","steps":["trace[53365910] 'read index received' (duration: 217.988589ms)","trace[53365910] 'applied index is now lower than readState.Index' (duration: 4.355µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-23T08:13:31.556665Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"218.233943ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 ","response":"range_response_count:1 size:822"}
{"level":"info","ts":"2025-11-23T08:13:31.556686Z","caller":"traceutil/trace.go:172","msg":"trace[118254718] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1365; }","duration":"218.269244ms","start":"2025-11-23T08:13:31.338410Z","end":"2025-11-23T08:13:31.556680Z","steps":["trace[118254718] 'agreement among raft nodes before linearized reading' (duration: 218.144533ms)"],"step_count":1}
{"level":"info","ts":"2025-11-23T08:13:31.556745Z","caller":"traceutil/trace.go:172","msg":"trace[408040910] transaction","detail":"{read_only:false; response_revision:1366; number_of_response:1; }","duration":"288.6452ms","start":"2025-11-23T08:13:31.268083Z","end":"2025-11-23T08:13:31.556728Z","steps":["trace[408040910] 'process raft request' (duration: 288.412942ms)"],"step_count":1}
{"level":"info","ts":"2025-11-23T08:13:42.824992Z","caller":"traceutil/trace.go:172","msg":"trace[1346268108] linearizableReadLoop","detail":"{readStateIndex:1509; appliedIndex:1509; }","duration":"180.761231ms","start":"2025-11-23T08:13:42.644217Z","end":"2025-11-23T08:13:42.824978Z","steps":["trace[1346268108] 'read index received' (duration: 180.755347ms)","trace[1346268108] 'applied index is now lower than readState.Index' (duration: 4.658µs)"],"step_count":2}
{"level":"warn","ts":"2025-11-23T08:13:42.825120Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.889323ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2025-11-23T08:13:42.825140Z","caller":"traceutil/trace.go:172","msg":"trace[1689261646] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1466; }","duration":"180.921565ms","start":"2025-11-23T08:13:42.644212Z","end":"2025-11-23T08:13:42.825134Z","steps":["trace[1689261646] 'agreement among raft nodes before linearized reading' (duration: 180.865956ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-23T08:13:42.826088Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.64829ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
{"level":"info","ts":"2025-11-23T08:13:42.826242Z","caller":"traceutil/trace.go:172","msg":"trace[934387840] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1467; }","duration":"102.809318ms","start":"2025-11-23T08:13:42.723425Z","end":"2025-11-23T08:13:42.826234Z","steps":["trace[934387840] 'agreement among raft nodes before linearized reading' (duration: 102.588071ms)"],"step_count":1}
{"level":"info","ts":"2025-11-23T08:13:42.826347Z","caller":"traceutil/trace.go:172","msg":"trace[994045618] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1467; }","duration":"224.338124ms","start":"2025-11-23T08:13:42.601996Z","end":"2025-11-23T08:13:42.826335Z","steps":["trace[994045618] 'process raft request' (duration: 223.675708ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-23T08:13:44.521336Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"212.355564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/yakd-dashboard/yakd-dashboard-5ff678cb9\" limit:1 ","response":"range_response_count:1 size:3621"}
{"level":"info","ts":"2025-11-23T08:13:44.525548Z","caller":"traceutil/trace.go:172","msg":"trace[1367152762] range","detail":"{range_begin:/registry/replicasets/yakd-dashboard/yakd-dashboard-5ff678cb9; range_end:; response_count:1; response_revision:1499; }","duration":"216.563053ms","start":"2025-11-23T08:13:44.308964Z","end":"2025-11-23T08:13:44.525527Z","steps":["trace[1367152762] 'range keys from in-memory index tree' (duration: 205.554285ms)"],"step_count":1}
{"level":"warn","ts":"2025-11-23T08:13:44.522154Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.200821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:1 size:1412"}
{"level":"info","ts":"2025-11-23T08:13:44.528192Z","caller":"traceutil/trace.go:172","msg":"trace[1277894628] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1499; }","duration":"138.251632ms","start":"2025-11-23T08:13:44.389932Z","end":"2025-11-23T08:13:44.528183Z","steps":["trace[1277894628] 'range keys from in-memory index tree' (duration: 132.11237ms)"],"step_count":1}
{"level":"info","ts":"2025-11-23T08:13:44.864965Z","caller":"traceutil/trace.go:172","msg":"trace[785793712] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1506; }","duration":"103.644121ms","start":"2025-11-23T08:13:44.761307Z","end":"2025-11-23T08:13:44.864951Z","steps":["trace[785793712] 'process raft request' (duration: 103.490123ms)"],"step_count":1}
{"level":"info","ts":"2025-11-23T08:13:48.755944Z","caller":"traceutil/trace.go:172","msg":"trace[1931102737] transaction","detail":"{read_only:false; response_revision:1552; number_of_response:1; }","duration":"116.768955ms","start":"2025-11-23T08:13:48.639160Z","end":"2025-11-23T08:13:48.755929Z","steps":["trace[1931102737] 'process raft request' (duration: 116.591819ms)"],"step_count":1}
==> kernel <==
08:16:11 up 5 min, 0 users, load average: 0.67, 1.10, 0.58
Linux addons-964416 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [9e910ff123e32ce12c666332e542d611040ccdc568a9fc18717d44e9a60184ce] <==
W1123 08:12:09.822322 1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1123 08:12:09.852214 1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
W1123 08:12:09.898957 1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
W1123 08:12:09.926132 1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
E1123 08:13:17.611749 1 conn.go:339] Error on socket receive: read tcp 192.168.39.198:8443->192.168.39.1:42274: use of closed network connection
I1123 08:13:26.928315 1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.34.250"}
I1123 08:13:45.778098 1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
I1123 08:13:46.048254 1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.133.87"}
I1123 08:13:53.531513 1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
I1123 08:14:08.484523 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
E1123 08:14:09.917549 1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
I1123 08:14:22.934753 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1123 08:14:22.935163 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1123 08:14:23.002373 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1123 08:14:23.002493 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1123 08:14:23.016104 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1123 08:14:23.016165 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1123 08:14:23.037056 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1123 08:14:23.037152 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
I1123 08:14:23.116975 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
I1123 08:14:23.117011 1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
W1123 08:14:24.016591 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
W1123 08:14:24.117369 1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
W1123 08:14:24.161684 1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
I1123 08:16:09.988338 1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.188.147"}
==> kube-controller-manager [39699de5a00c064cdca41c90eb8b78538e5879de76016c72552fe5d7db95d87e] <==
E1123 08:14:31.904649 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1123 08:14:33.620685 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1123 08:14:33.621940 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1123 08:14:39.225085 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1123 08:14:39.226548 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
I1123 08:14:39.941436 1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
I1123 08:14:39.941545 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1123 08:14:39.993205 1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
I1123 08:14:39.993274 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
E1123 08:14:40.934382 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1123 08:14:40.936279 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1123 08:14:42.837216 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1123 08:14:42.838281 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1123 08:14:59.683130 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1123 08:14:59.684236 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1123 08:15:00.738026 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1123 08:15:00.738944 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1123 08:15:07.477705 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1123 08:15:07.479413 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1123 08:15:33.465920 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1123 08:15:33.467191 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1123 08:15:42.284918 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1123 08:15:42.286215 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
E1123 08:15:48.254660 1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
E1123 08:15:48.255727 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
==> kube-proxy [33a32a377fb8fd085baa47ac0065a2a6c9b61233646d15f815186bfb912aaee0] <==
I1123 08:11:42.151021 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1123 08:11:42.252307 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1123 08:11:42.252355 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.198"]
E1123 08:11:42.252424 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1123 08:11:42.542292 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1123 08:11:42.542359 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1123 08:11:42.542385 1 server_linux.go:132] "Using iptables Proxier"
I1123 08:11:42.559573 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1123 08:11:42.561059 1 server.go:527] "Version info" version="v1.34.1"
I1123 08:11:42.561093 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1123 08:11:42.577158 1 config.go:200] "Starting service config controller"
I1123 08:11:42.578438 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1123 08:11:42.578716 1 config.go:106] "Starting endpoint slice config controller"
I1123 08:11:42.578724 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1123 08:11:42.578978 1 config.go:403] "Starting serviceCIDR config controller"
I1123 08:11:42.578986 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1123 08:11:42.587794 1 config.go:309] "Starting node config controller"
I1123 08:11:42.587976 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1123 08:11:42.679331 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1123 08:11:42.679399 1 shared_informer.go:356] "Caches are synced" controller="service config"
I1123 08:11:42.679440 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1123 08:11:42.689078 1 shared_informer.go:356] "Caches are synced" controller="node config"
==> kube-scheduler [7f0364c26ba8a2ffe836cbcc6d72ce91fb1532b3629b02515db50a6d4b466dc0] <==
I1123 08:11:33.372123 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1123 08:11:33.372736 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1123 08:11:33.373161 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1123 08:11:33.372769 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
E1123 08:11:33.376216 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
E1123 08:11:33.376324 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
E1123 08:11:33.380328 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
E1123 08:11:33.380529 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
E1123 08:11:33.380601 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
E1123 08:11:33.380654 1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
E1123 08:11:33.380687 1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
E1123 08:11:33.380743 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
E1123 08:11:33.380778 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
E1123 08:11:33.380889 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
E1123 08:11:33.380916 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
E1123 08:11:33.380998 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
E1123 08:11:33.381620 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
E1123 08:11:33.381629 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
E1123 08:11:33.381732 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
E1123 08:11:33.381799 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
E1123 08:11:33.381847 1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
E1123 08:11:33.381950 1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
E1123 08:11:33.382226 1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
E1123 08:11:34.288353 1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
I1123 08:11:36.673912 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Nov 23 08:14:36 addons-964416 kubelet[1501]: I1123 08:14:36.797555 1501 scope.go:117] "RemoveContainer" containerID="4febd74e4a681e8e173be6c68618b2fbf4f51353856d4df6171b3b4a79c388cd"
Nov 23 08:14:36 addons-964416 kubelet[1501]: I1123 08:14:36.914451 1501 scope.go:117] "RemoveContainer" containerID="417a81c3c28cbe904ab69f5fbf17edb0898de82f819a56e9a0a9dda73a872883"
Nov 23 08:14:37 addons-964416 kubelet[1501]: I1123 08:14:37.030888 1501 scope.go:117] "RemoveContainer" containerID="48e018b18ce521f79c9dcc1b911ab68af1b7a68e6427e053ab8b723ad07af9af"
Nov 23 08:14:37 addons-964416 kubelet[1501]: I1123 08:14:37.151867 1501 scope.go:117] "RemoveContainer" containerID="f739082aca64711f4bb3e4a6759ba61e34f287d1886b3d6484e74aac69600482"
Nov 23 08:14:45 addons-964416 kubelet[1501]: E1123 08:14:45.695723 1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885685695259039 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:14:45 addons-964416 kubelet[1501]: E1123 08:14:45.695751 1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885685695259039 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:14:55 addons-964416 kubelet[1501]: E1123 08:14:55.699101 1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885695698618394 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:14:55 addons-964416 kubelet[1501]: E1123 08:14:55.699169 1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885695698618394 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:15:05 addons-964416 kubelet[1501]: E1123 08:15:05.702892 1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885705702400592 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:15:05 addons-964416 kubelet[1501]: E1123 08:15:05.702925 1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885705702400592 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:15:15 addons-964416 kubelet[1501]: E1123 08:15:15.707342 1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885715706085904 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:15:15 addons-964416 kubelet[1501]: E1123 08:15:15.707393 1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885715706085904 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:15:25 addons-964416 kubelet[1501]: E1123 08:15:25.710185 1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885725709375223 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:15:25 addons-964416 kubelet[1501]: E1123 08:15:25.710396 1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885725709375223 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:15:31 addons-964416 kubelet[1501]: I1123 08:15:31.256646 1501 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
Nov 23 08:15:32 addons-964416 kubelet[1501]: I1123 08:15:32.255443 1501 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-8vc9q" secret="" err="secret \"gcp-auth\" not found"
Nov 23 08:15:35 addons-964416 kubelet[1501]: E1123 08:15:35.716524 1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885735716122413 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:15:35 addons-964416 kubelet[1501]: E1123 08:15:35.716545 1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885735716122413 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:15:45 addons-964416 kubelet[1501]: E1123 08:15:45.719632 1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885745719030901 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:15:45 addons-964416 kubelet[1501]: E1123 08:15:45.719679 1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885745719030901 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:15:55 addons-964416 kubelet[1501]: E1123 08:15:55.722350 1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885755721926782 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:15:55 addons-964416 kubelet[1501]: E1123 08:15:55.722396 1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885755721926782 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:16:05 addons-964416 kubelet[1501]: E1123 08:16:05.725299 1501 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1763885765724792008 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:16:05 addons-964416 kubelet[1501]: E1123 08:16:05.725344 1501 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1763885765724792008 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:588566} inodes_used:{value:201}}"
Nov 23 08:16:10 addons-964416 kubelet[1501]: I1123 08:16:10.051401 1501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq2d6\" (UniqueName: \"kubernetes.io/projected/542a36d2-e7f4-4a68-8a14-d26c69029ccd-kube-api-access-vq2d6\") pod \"hello-world-app-5d498dc89-4czrb\" (UID: \"542a36d2-e7f4-4a68-8a14-d26c69029ccd\") " pod="default/hello-world-app-5d498dc89-4czrb"
==> storage-provisioner [82d5ef951538ac924b333da325abf735f2e52434e3c5dab819290dc703c0fa9f] <==
W1123 08:15:47.613227 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:15:49.616726 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:15:49.622571 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:15:51.626423 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:15:51.631760 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:15:53.636351 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:15:53.645038 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:15:55.649575 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:15:55.656158 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:15:57.659480 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:15:57.668064 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:15:59.673057 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:15:59.679089 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:16:01.682521 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:16:01.690760 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:16:03.694616 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:16:03.699719 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:16:05.703550 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:16:05.712982 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:16:07.717251 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:16:07.722744 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:16:09.727660 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:16:09.735082 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:16:11.741064 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1123 08:16:11.745934 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:262: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-964416 -n addons-964416
helpers_test.go:269: (dbg) Run: kubectl --context addons-964416 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-n8xfv ingress-nginx-admission-patch-qjtrl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run: kubectl --context addons-964416 describe pod ingress-nginx-admission-create-n8xfv ingress-nginx-admission-patch-qjtrl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-964416 describe pod ingress-nginx-admission-create-n8xfv ingress-nginx-admission-patch-qjtrl: exit status 1 (60.34429ms)
** stderr **
Error from server (NotFound): pods "ingress-nginx-admission-create-n8xfv" not found
Error from server (NotFound): pods "ingress-nginx-admission-patch-qjtrl" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-964416 describe pod ingress-nginx-admission-create-n8xfv ingress-nginx-admission-patch-qjtrl: exit status 1
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-964416 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-964416 addons disable ingress-dns --alsologtostderr -v=1: (1.784819733s)
addons_test.go:1053: (dbg) Run: out/minikube-linux-amd64 -p addons-964416 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-964416 addons disable ingress --alsologtostderr -v=1: (7.745813275s)
--- FAIL: TestAddons/parallel/Ingress (156.34s)