=== RUN TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run: out/minikube-linux-amd64 start -p functional-329536 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 08:23:42.380836 69293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/addons-622548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:25:04.305707 69293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/addons-622548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:27:20.441071 69293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/addons-622548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 08:27:48.154434 69293 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/addons-622548/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-329536 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (4m41.885822108s)
-- stdout --
* [functional-329536] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=22182
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/22182-65389/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-65389/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on existing profile
* Starting "functional-329536" primary control-plane node in "functional-329536" cluster
- apiserver.enable-admission-plugins=NamespaceAutoProvision
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
-- /stdout --
** stderr **
X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-329536 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 4m41.886074881s for "functional-329536" cluster.
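Note on the failure mode: exit status 80 here comes from the --wait=all verification ("extra waiting: WaitExtra: context deadline exceeded" in stderr above), i.e. minikube timed out waiting for the components listed in the profile's VerifyComponents map (apiserver, apps_running, default_sa, extra, kubelet, node_ready, system_pods) rather than the start command erroring outright. A minimal triage sketch, assuming the functional-329536 profile is still running and its kubeconfig context exists:

  # Report host/apiserver/kubelet state for the profile (the post-mortem below runs a variant of this)
  out/minikube-linux-amd64 status -p functional-329536

  # List all pods; --wait=all is meant to block until the waited-on components report Ready
  kubectl --context functional-329536 get pods -A

  # Pull the last 25 lines of cluster logs, exactly as the post-mortem helper does
  out/minikube-linux-amd64 -p functional-329536 logs -n 25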
I1217 08:28:05.482519 69293 config.go:182] Loaded profile config "functional-329536": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======> post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run: out/minikube-linux-amd64 status --format={{.Host}} -p functional-329536 -n functional-329536
helpers_test.go:253: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======> post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run: out/minikube-linux-amd64 -p functional-329536 logs -n 25
helpers_test.go:261: TestFunctional/serial/ExtraConfig logs:
-- stdout --
==> Audit <==
┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
│ unpause │ nospam-412325 --log_dir /tmp/nospam-412325 unpause │ nospam-412325 │ jenkins │ v1.37.0 │ 17 Dec 25 08:20 UTC │ 17 Dec 25 08:20 UTC │
│ unpause │ nospam-412325 --log_dir /tmp/nospam-412325 unpause │ nospam-412325 │ jenkins │ v1.37.0 │ 17 Dec 25 08:20 UTC │ 17 Dec 25 08:20 UTC │
│ unpause │ nospam-412325 --log_dir /tmp/nospam-412325 unpause │ nospam-412325 │ jenkins │ v1.37.0 │ 17 Dec 25 08:20 UTC │ 17 Dec 25 08:20 UTC │
│ stop │ nospam-412325 --log_dir /tmp/nospam-412325 stop │ nospam-412325 │ jenkins │ v1.37.0 │ 17 Dec 25 08:20 UTC │ 17 Dec 25 08:20 UTC │
│ stop │ nospam-412325 --log_dir /tmp/nospam-412325 stop │ nospam-412325 │ jenkins │ v1.37.0 │ 17 Dec 25 08:20 UTC │ 17 Dec 25 08:20 UTC │
│ stop │ nospam-412325 --log_dir /tmp/nospam-412325 stop │ nospam-412325 │ jenkins │ v1.37.0 │ 17 Dec 25 08:20 UTC │ 17 Dec 25 08:20 UTC │
│ delete │ -p nospam-412325 │ nospam-412325 │ jenkins │ v1.37.0 │ 17 Dec 25 08:20 UTC │ 17 Dec 25 08:20 UTC │
│ start │ -p functional-329536 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2 │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:20 UTC │ 17 Dec 25 08:22 UTC │
│ start │ -p functional-329536 --alsologtostderr -v=8 │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:22 UTC │ 17 Dec 25 08:23 UTC │
│ cache │ functional-329536 cache add registry.k8s.io/pause:3.1 │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ cache │ functional-329536 cache add registry.k8s.io/pause:3.3 │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ cache │ functional-329536 cache add registry.k8s.io/pause:latest │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ cache │ functional-329536 cache add minikube-local-cache-test:functional-329536 │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ cache │ functional-329536 cache delete minikube-local-cache-test:functional-329536 │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ cache │ delete registry.k8s.io/pause:3.3 │ minikube │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ cache │ list │ minikube │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ ssh │ functional-329536 ssh sudo crictl images │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ ssh │ functional-329536 ssh sudo docker rmi registry.k8s.io/pause:latest │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ ssh │ functional-329536 ssh sudo crictl inspecti registry.k8s.io/pause:latest │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ │
│ cache │ functional-329536 cache reload │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ ssh │ functional-329536 ssh sudo crictl inspecti registry.k8s.io/pause:latest │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ cache │ delete registry.k8s.io/pause:3.1 │ minikube │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ cache │ delete registry.k8s.io/pause:latest │ minikube │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ kubectl │ functional-329536 kubectl -- --context functional-329536 get pods │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ 17 Dec 25 08:23 UTC │
│ start │ -p functional-329536 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-329536 │ jenkins │ v1.37.0 │ 17 Dec 25 08:23 UTC │ │
└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
==> Last Start <==
Log file created at: 2025/12/17 08:23:23
Running on machine: ubuntu-20-agent-4
Binary: Built with gc go1.25.5 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1217 08:23:23.649716 74413 out.go:360] Setting OutFile to fd 1 ...
I1217 08:23:23.649971 74413 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:23:23.649975 74413 out.go:374] Setting ErrFile to fd 2...
I1217 08:23:23.649978 74413 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 08:23:23.650217 74413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22182-65389/.minikube/bin
I1217 08:23:23.650639 74413 out.go:368] Setting JSON to false
I1217 08:23:23.651461 74413 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7556,"bootTime":1765952248,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
I1217 08:23:23.651512 74413 start.go:143] virtualization: kvm guest
I1217 08:23:23.653968 74413 out.go:179] * [functional-329536] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
I1217 08:23:23.655122 74413 out.go:179] - MINIKUBE_LOCATION=22182
I1217 08:23:23.655109 74413 notify.go:221] Checking for updates...
I1217 08:23:23.656360 74413 out.go:179] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1217 08:23:23.657651 74413 out.go:179] - KUBECONFIG=/home/jenkins/minikube-integration/22182-65389/kubeconfig
I1217 08:23:23.658820 74413 out.go:179] - MINIKUBE_HOME=/home/jenkins/minikube-integration/22182-65389/.minikube
I1217 08:23:23.660048 74413 out.go:179] - MINIKUBE_BIN=out/minikube-linux-amd64
I1217 08:23:23.661305 74413 out.go:179] - MINIKUBE_FORCE_SYSTEMD=
I1217 08:23:23.663170 74413 config.go:182] Loaded profile config "functional-329536": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 08:23:23.663278 74413 driver.go:422] Setting default libvirt URI to qemu:///system
I1217 08:23:23.693723 74413 out.go:179] * Using the kvm2 driver based on existing profile
I1217 08:23:23.694840 74413 start.go:309] selected driver: kvm2
I1217 08:23:23.694846 74413 start.go:927] validating driver "kvm2" against &{Name:functional-329536 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-329536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 08:23:23.694923 74413 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1217 08:23:23.695817 74413 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1217 08:23:23.695839 74413 cni.go:84] Creating CNI manager for ""
I1217 08:23:23.695893 74413 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1217 08:23:23.695936 74413 start.go:353] cluster config:
{Name:functional-329536 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-329536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 08:23:23.696024 74413 iso.go:125] acquiring lock: {Name:mk7833ab72811d40adde681b338ca296ff609508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1217 08:23:23.697306 74413 out.go:179] * Starting "functional-329536" primary control-plane node in "functional-329536" cluster
I1217 08:23:23.698366 74413 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
I1217 08:23:23.698390 74413 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22182-65389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4
I1217 08:23:23.698403 74413 cache.go:65] Caching tarball of preloaded images
I1217 08:23:23.698475 74413 preload.go:238] Found /home/jenkins/minikube-integration/22182-65389/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1217 08:23:23.698482 74413 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on docker
I1217 08:23:23.698561 74413 profile.go:143] Saving config to /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/functional-329536/config.json ...
I1217 08:23:23.698738 74413 start.go:360] acquireMachinesLock for functional-329536: {Name:mk83ed3859db333a3c72f28c083a5837b12781d2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1217 08:23:23.698775 74413 start.go:364] duration metric: took 25.369µs to acquireMachinesLock for "functional-329536"
I1217 08:23:23.698784 74413 start.go:96] Skipping create...Using existing machine configuration
I1217 08:23:23.698788 74413 fix.go:54] fixHost starting:
I1217 08:23:23.700520 74413 fix.go:112] recreateIfNeeded on functional-329536: state=Running err=<nil>
W1217 08:23:23.700535 74413 fix.go:138] unexpected machine state, will restart: <nil>
I1217 08:23:23.701951 74413 out.go:252] * Updating the running kvm2 "functional-329536" VM ...
I1217 08:23:23.701968 74413 machine.go:94] provisionDockerMachine start ...
I1217 08:23:23.704017 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:23.704355 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:23.704375 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:23.704505 74413 main.go:143] libmachine: Using SSH client type: native
I1217 08:23:23.704572 74413 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I1217 08:23:23.704576 74413 main.go:143] libmachine: About to run SSH command:
hostname
I1217 08:23:23.822598 74413 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-329536
I1217 08:23:23.822622 74413 buildroot.go:166] provisioning hostname "functional-329536"
I1217 08:23:23.825448 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:23.825832 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:23.825848 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:23.826042 74413 main.go:143] libmachine: Using SSH client type: native
I1217 08:23:23.826168 74413 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I1217 08:23:23.826179 74413 main.go:143] libmachine: About to run SSH command:
sudo hostname functional-329536 && echo "functional-329536" | sudo tee /etc/hostname
I1217 08:23:23.960594 74413 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-329536
I1217 08:23:23.963528 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:23.963931 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:23.963952 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:23.964159 74413 main.go:143] libmachine: Using SSH client type: native
I1217 08:23:23.964257 74413 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I1217 08:23:23.964267 74413 main.go:143] libmachine: About to run SSH command:
if ! grep -xq '.*\sfunctional-329536' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-329536/g' /etc/hosts;
else
echo '127.0.1.1 functional-329536' | sudo tee -a /etc/hosts;
fi
fi
I1217 08:23:24.085709 74413 main.go:143] libmachine: SSH cmd err, output: <nil>:
I1217 08:23:24.085727 74413 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22182-65389/.minikube CaCertPath:/home/jenkins/minikube-integration/22182-65389/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22182-65389/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22182-65389/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22182-65389/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22182-65389/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22182-65389/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22182-65389/.minikube}
I1217 08:23:24.085741 74413 buildroot.go:174] setting up certificates
I1217 08:23:24.085749 74413 provision.go:84] configureAuth start
I1217 08:23:24.088519 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.088903 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:24.088919 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.091340 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.091702 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:24.091718 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.091851 74413 provision.go:143] copyHostCerts
I1217 08:23:24.091915 74413 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-65389/.minikube/ca.pem, removing ...
I1217 08:23:24.091930 74413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-65389/.minikube/ca.pem
I1217 08:23:24.092015 74413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-65389/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22182-65389/.minikube/ca.pem (1078 bytes)
I1217 08:23:24.092147 74413 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-65389/.minikube/cert.pem, removing ...
I1217 08:23:24.092152 74413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-65389/.minikube/cert.pem
I1217 08:23:24.092188 74413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-65389/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22182-65389/.minikube/cert.pem (1123 bytes)
I1217 08:23:24.092268 74413 exec_runner.go:144] found /home/jenkins/minikube-integration/22182-65389/.minikube/key.pem, removing ...
I1217 08:23:24.092271 74413 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22182-65389/.minikube/key.pem
I1217 08:23:24.092297 74413 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22182-65389/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22182-65389/.minikube/key.pem (1675 bytes)
I1217 08:23:24.092348 74413 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22182-65389/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22182-65389/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22182-65389/.minikube/certs/ca-key.pem org=jenkins.functional-329536 san=[127.0.0.1 192.168.39.217 functional-329536 localhost minikube]
I1217 08:23:24.122273 74413 provision.go:177] copyRemoteCerts
I1217 08:23:24.122318 74413 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1217 08:23:24.124765 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.125131 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:24.125153 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.125314 74413 sshutil.go:56] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-65389/.minikube/machines/functional-329536/id_rsa Username:docker}
I1217 08:23:24.217121 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1217 08:23:24.249535 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1217 08:23:24.279085 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1217 08:23:24.314648 74413 provision.go:87] duration metric: took 228.885581ms to configureAuth
I1217 08:23:24.314670 74413 buildroot.go:189] setting minikube options for container-runtime
I1217 08:23:24.314850 74413 config.go:182] Loaded profile config "functional-329536": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 08:23:24.317740 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.318182 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:24.318200 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.318395 74413 main.go:143] libmachine: Using SSH client type: native
I1217 08:23:24.318491 74413 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I1217 08:23:24.318499 74413 main.go:143] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1217 08:23:24.447733 74413 main.go:143] libmachine: SSH cmd err, output: <nil>: tmpfs
I1217 08:23:24.447746 74413 buildroot.go:70] root file system type: tmpfs
I1217 08:23:24.447853 74413 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I1217 08:23:24.450455 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.450790 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:24.450803 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.450957 74413 main.go:143] libmachine: Using SSH client type: native
I1217 08:23:24.451034 74413 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I1217 08:23:24.451072 74413 main.go:143] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
-H fd:// --containerd=/run/containerd/containerd.sock \
-H unix:///var/run/docker.sock \
--default-ulimit=nofile=1048576:1048576 \
--tlsverify \
--tlscacert /etc/docker/ca.pem \
--tlscert /etc/docker/server.pem \
--tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1217 08:23:24.592472 74413 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target minikube-automount.service nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
Wants=network-online.target containerd.service
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=always
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// --containerd=/run/containerd/containerd.sock -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
I1217 08:23:24.595331 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.595768 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:24.595785 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.595980 74413 main.go:143] libmachine: Using SSH client type: native
I1217 08:23:24.596076 74413 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I1217 08:23:24.596112 74413 main.go:143] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1217 08:23:24.719893 74413 main.go:143] libmachine: SSH cmd err, output: <nil>:
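The one-line SSH command logged just above is a check-then-swap update of the docker unit; the same logic is expanded here purely for readability, with nothing added beyond what the logged command already does:

  # diff exits non-zero when the generated unit differs from the installed one,
  # so the update-and-restart branch only runs when docker.service actually changed.
  if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
    sudo systemctl -f daemon-reload \
      && sudo systemctl -f enable docker \
      && sudo systemctl -f restart docker
  fi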
I1217 08:23:24.719910 74413 machine.go:97] duration metric: took 1.01793595s to provisionDockerMachine
I1217 08:23:24.719920 74413 start.go:293] postStartSetup for "functional-329536" (driver="kvm2")
I1217 08:23:24.719927 74413 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1217 08:23:24.720003 74413 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1217 08:23:24.722983 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.723385 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:24.723401 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.723621 74413 sshutil.go:56] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-65389/.minikube/machines/functional-329536/id_rsa Username:docker}
I1217 08:23:24.812838 74413 ssh_runner.go:195] Run: cat /etc/os-release
I1217 08:23:24.817450 74413 info.go:137] Remote host: Buildroot 2025.02
I1217 08:23:24.817465 74413 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-65389/.minikube/addons for local assets ...
I1217 08:23:24.817528 74413 filesync.go:126] Scanning /home/jenkins/minikube-integration/22182-65389/.minikube/files for local assets ...
I1217 08:23:24.817594 74413 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-65389/.minikube/files/etc/ssl/certs/692932.pem -> 692932.pem in /etc/ssl/certs
I1217 08:23:24.817657 74413 filesync.go:149] local asset: /home/jenkins/minikube-integration/22182-65389/.minikube/files/etc/test/nested/copy/69293/hosts -> hosts in /etc/test/nested/copy/69293
I1217 08:23:24.817688 74413 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/69293
I1217 08:23:24.829661 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/files/etc/ssl/certs/692932.pem --> /etc/ssl/certs/692932.pem (1708 bytes)
I1217 08:23:24.859307 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/files/etc/test/nested/copy/69293/hosts --> /etc/test/nested/copy/69293/hosts (40 bytes)
I1217 08:23:24.889290 74413 start.go:296] duration metric: took 169.354946ms for postStartSetup
I1217 08:23:24.889322 74413 fix.go:56] duration metric: took 1.190533591s for fixHost
I1217 08:23:24.892359 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.892744 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:24.892760 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:24.892958 74413 main.go:143] libmachine: Using SSH client type: native
I1217 08:23:24.893061 74413 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84f0c0] 0x851d60 <nil> [] 0s} 192.168.39.217 22 <nil> <nil>}
I1217 08:23:24.893067 74413 main.go:143] libmachine: About to run SSH command:
date +%s.%N
I1217 08:23:25.008168 74413 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765959805.001476280
I1217 08:23:25.008181 74413 fix.go:216] guest clock: 1765959805.001476280
I1217 08:23:25.008188 74413 fix.go:229] Guest: 2025-12-17 08:23:25.00147628 +0000 UTC Remote: 2025-12-17 08:23:24.889324753 +0000 UTC m=+1.287581497 (delta=112.151527ms)
I1217 08:23:25.008203 74413 fix.go:200] guest clock delta is within tolerance: 112.151527ms
I1217 08:23:25.008207 74413 start.go:83] releasing machines lock for "functional-329536", held for 1.309427151s
I1217 08:23:25.011558 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:25.011931 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:25.011952 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:25.012536 74413 ssh_runner.go:195] Run: cat /version.json
I1217 08:23:25.012619 74413 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1217 08:23:25.015528 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:25.015770 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:25.015967 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:25.015993 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:25.016192 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:25.016197 74413 sshutil.go:56] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-65389/.minikube/machines/functional-329536/id_rsa Username:docker}
I1217 08:23:25.016212 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:25.016411 74413 sshutil.go:56] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-65389/.minikube/machines/functional-329536/id_rsa Username:docker}
I1217 08:23:25.100821 74413 ssh_runner.go:195] Run: systemctl --version
I1217 08:23:25.124193 74413 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W1217 08:23:25.131042 74413 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I1217 08:23:25.131086 74413 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1217 08:23:25.142605 74413 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1217 08:23:25.142621 74413 start.go:496] detecting cgroup driver to use...
I1217 08:23:25.142727 74413 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1217 08:23:25.165054 74413 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
I1217 08:23:25.177771 74413 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1217 08:23:25.190522 74413 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1217 08:23:25.190576 74413 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1217 08:23:25.203589 74413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1217 08:23:25.216652 74413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1217 08:23:25.230614 74413 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1217 08:23:25.243669 74413 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1217 08:23:25.257629 74413 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1217 08:23:25.270453 74413 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1217 08:23:25.283527 74413 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1217 08:23:25.295903 74413 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1217 08:23:25.306385 74413 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1217 08:23:25.317297 74413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 08:23:25.513719 74413 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1217 08:23:25.566558 74413 start.go:496] detecting cgroup driver to use...
I1217 08:23:25.566652 74413 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I1217 08:23:25.584245 74413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1217 08:23:25.605010 74413 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I1217 08:23:25.635699 74413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I1217 08:23:25.652465 74413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1217 08:23:25.668415 74413 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I1217 08:23:25.692363 74413 ssh_runner.go:195] Run: which cri-dockerd
I1217 08:23:25.696726 74413 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I1217 08:23:25.708577 74413 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
I1217 08:23:25.728695 74413 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I1217 08:23:25.943375 74413 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I1217 08:23:26.161818 74413 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
I1217 08:23:26.161974 74413 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I1217 08:23:26.188455 74413 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
I1217 08:23:26.204823 74413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 08:23:26.400040 74413 ssh_runner.go:195] Run: sudo systemctl restart docker
I1217 08:23:52.309266 74413 ssh_runner.go:235] Completed: sudo systemctl restart docker: (25.909200769s)
I1217 08:23:52.309331 74413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1217 08:23:52.349956 74413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I1217 08:23:52.379820 74413 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
I1217 08:23:52.433826 74413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1217 08:23:52.452147 74413 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I1217 08:23:52.616050 74413 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I1217 08:23:52.769155 74413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 08:23:52.924486 74413 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I1217 08:23:52.960550 74413 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
I1217 08:23:52.978074 74413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 08:23:53.167365 74413 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I1217 08:23:53.297118 74413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I1217 08:23:53.318499 74413 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
I1217 08:23:53.318572 74413 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I1217 08:23:53.324683 74413 start.go:564] Will wait 60s for crictl version
I1217 08:23:53.324749 74413 ssh_runner.go:195] Run: which crictl
I1217 08:23:53.329134 74413 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1217 08:23:53.359659 74413 start.go:580] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 28.5.2
RuntimeApiVersion: v1
I1217 08:23:53.359741 74413 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1217 08:23:53.386422 74413 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I1217 08:23:53.413084 74413 out.go:252] * Preparing Kubernetes v1.34.3 on Docker 28.5.2 ...
I1217 08:23:53.415804 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:53.416229 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:23:53.416249 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:23:53.416428 74413 ssh_runner.go:195] Run: grep 192.168.39.1 host.minikube.internal$ /etc/hosts
I1217 08:23:53.422558 74413 out.go:179] - apiserver.enable-admission-plugins=NamespaceAutoProvision
I1217 08:23:53.424074 74413 kubeadm.go:884] updating cluster {Name:functional-329536 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-329536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1217 08:23:53.424203 74413 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime docker
I1217 08:23:53.424245 74413 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1217 08:23:53.459361 74413 docker.go:691] Got preloaded images: -- stdout --
minikube-local-cache-test:functional-329536
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/pause:3.10.1
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/pause:latest
-- /stdout --
I1217 08:23:53.459372 74413 docker.go:621] Images already preloaded, skipping extraction
I1217 08:23:53.459426 74413 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I1217 08:23:53.500709 74413 docker.go:691] Got preloaded images: -- stdout --
minikube-local-cache-test:functional-329536
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/pause:3.10.1
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/pause:latest
-- /stdout --
I1217 08:23:53.500724 74413 cache_images.go:86] Images are preloaded, skipping loading
I1217 08:23:53.500741 74413 kubeadm.go:935] updating node { 192.168.39.217 8441 v1.34.3 docker true true} ...
I1217 08:23:53.500851 74413 kubeadm.go:947] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-329536 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
[Install]
config:
{KubernetesVersion:v1.34.3 ClusterName:functional-329536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1217 08:23:53.500907 74413 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I1217 08:23:53.705730 74413 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
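The extraconfig.go line above records that the user-supplied value replaces minikube's default admission-plugin list rather than being appended to it, so NamespaceAutoProvision ends up as the only explicitly requested plugin. If preserving the defaults were the intent, one possible variant (plugin names copied verbatim from the default list logged above) would be to pass the combined list:

  # Keep the default plugins and add NamespaceAutoProvision, instead of replacing the list outright
  out/minikube-linux-amd64 start -p functional-329536 --wait=all \
    --extra-config=apiserver.enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NamespaceAutoProvision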
I1217 08:23:53.705761 74413 cni.go:84] Creating CNI manager for ""
I1217 08:23:53.705785 74413 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1217 08:23:53.705799 74413 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
I1217 08:23:53.705818 74413 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8441 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-329536 NodeName:functional-329536 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1217 08:23:53.705971 74413 kubeadm.go:196] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.217
  bindPort: 8441
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "functional-329536"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.39.217"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceAutoProvision"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8441
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.34.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1217 08:23:53.706039 74413 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
I1217 08:23:53.748500 74413 binaries.go:51] Found k8s binaries, skipping transfer
I1217 08:23:53.748561 74413 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1217 08:23:53.812724 74413 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
I1217 08:23:53.885351 74413 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1217 08:23:54.009179 74413 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2074 bytes)
I1217 08:23:54.104044 74413 ssh_runner.go:195] Run: grep 192.168.39.217 control-plane.minikube.internal$ /etc/hosts
I1217 08:23:54.116049 74413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 08:23:54.534773 74413 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 08:23:54.587810 74413 certs.go:69] Setting up /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/functional-329536 for IP: 192.168.39.217
I1217 08:23:54.587824 74413 certs.go:195] generating shared ca certs ...
I1217 08:23:54.587840 74413 certs.go:227] acquiring lock for ca certs: {Name:mk057e3e8f72c5ec2e302b6aa122720386a7ade6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 08:23:54.588018 74413 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22182-65389/.minikube/ca.key
I1217 08:23:54.588053 74413 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22182-65389/.minikube/proxy-client-ca.key
I1217 08:23:54.588059 74413 certs.go:257] generating profile certs ...
I1217 08:23:54.588154 74413 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/functional-329536/client.key
I1217 08:23:54.588214 74413 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/functional-329536/apiserver.key.4d55c46e
I1217 08:23:54.588264 74413 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/functional-329536/proxy-client.key
I1217 08:23:54.588364 74413 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-65389/.minikube/certs/69293.pem (1338 bytes)
W1217 08:23:54.588394 74413 certs.go:480] ignoring /home/jenkins/minikube-integration/22182-65389/.minikube/certs/69293_empty.pem, impossibly tiny 0 bytes
I1217 08:23:54.588400 74413 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-65389/.minikube/certs/ca-key.pem (1679 bytes)
I1217 08:23:54.588426 74413 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-65389/.minikube/certs/ca.pem (1078 bytes)
I1217 08:23:54.588445 74413 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-65389/.minikube/certs/cert.pem (1123 bytes)
I1217 08:23:54.588466 74413 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-65389/.minikube/certs/key.pem (1675 bytes)
I1217 08:23:54.588500 74413 certs.go:484] found cert: /home/jenkins/minikube-integration/22182-65389/.minikube/files/etc/ssl/certs/692932.pem (1708 bytes)
I1217 08:23:54.589224 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1217 08:23:54.716701 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1217 08:23:54.807759 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1217 08:23:54.901070 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1217 08:23:55.029412 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/functional-329536/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1217 08:23:55.145763 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/functional-329536/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1217 08:23:55.201144 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/functional-329536/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1217 08:23:55.300551 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/profiles/functional-329536/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1217 08:23:55.347654 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1217 08:23:55.398042 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/certs/69293.pem --> /usr/share/ca-certificates/69293.pem (1338 bytes)
I1217 08:23:55.445485 74413 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22182-65389/.minikube/files/etc/ssl/certs/692932.pem --> /usr/share/ca-certificates/692932.pem (1708 bytes)
I1217 08:23:55.491543 74413 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1217 08:23:55.520609 74413 ssh_runner.go:195] Run: openssl version
I1217 08:23:55.530103 74413 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
I1217 08:23:55.547234 74413 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
I1217 08:23:55.560489 74413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1217 08:23:55.568845 74413 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 08:15 /usr/share/ca-certificates/minikubeCA.pem
I1217 08:23:55.568885 74413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1217 08:23:55.578873 74413 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
I1217 08:23:55.597171 74413 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/69293.pem
I1217 08:23:55.611781 74413 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/69293.pem /etc/ssl/certs/69293.pem
I1217 08:23:55.631078 74413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69293.pem
I1217 08:23:55.639159 74413 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 08:20 /usr/share/ca-certificates/69293.pem
I1217 08:23:55.639203 74413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69293.pem
I1217 08:23:55.647246 74413 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
I1217 08:23:55.663238 74413 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/692932.pem
I1217 08:23:55.678976 74413 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/692932.pem /etc/ssl/certs/692932.pem
I1217 08:23:55.694056 74413 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/692932.pem
I1217 08:23:55.701198 74413 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 08:20 /usr/share/ca-certificates/692932.pem
I1217 08:23:55.701250 74413 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/692932.pem
I1217 08:23:55.710232 74413 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
I1217 08:23:55.725106 74413 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1217 08:23:55.734941 74413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1217 08:23:55.742931 74413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1217 08:23:55.753916 74413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1217 08:23:55.762498 74413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1217 08:23:55.769349 74413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1217 08:23:55.776705 74413 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1217 08:23:55.788214 74413 kubeadm.go:401] StartCluster: {Name:functional-329536 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22141/minikube-v1.37.0-1765846775-22141-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-329536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1217 08:23:55.788311 74413 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1217 08:23:55.814050 74413 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1217 08:23:55.827271 74413 kubeadm.go:417] found existing configuration files, will attempt cluster restart
I1217 08:23:55.827279 74413 kubeadm.go:598] restartPrimaryControlPlane start ...
I1217 08:23:55.827320 74413 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1217 08:23:55.844044 74413 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1217 08:23:55.844592 74413 kubeconfig.go:125] found "functional-329536" server: "https://192.168.39.217:8441"
I1217 08:23:55.845725 74413 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1217 08:23:55.862049 74413 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
-- stdout --
--- /var/tmp/minikube/kubeadm.yaml
+++ /var/tmp/minikube/kubeadm.yaml.new
@@ -24,7 +24,7 @@
   certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
   extraArgs:
     - name: "enable-admission-plugins"
-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
+      value: "NamespaceAutoProvision"
 controllerManager:
   extraArgs:
     - name: "allocate-node-cidrs"
-- /stdout --
I1217 08:23:55.862060 74413 kubeadm.go:1161] stopping kube-system containers ...
I1217 08:23:55.862114 74413 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1217 08:23:55.902498 74413 docker.go:484] Stopping containers: [807f4bcba4bb 0004bfc96ad7 74c03df23410 4bca2ef1e9d9 93cd2777ee7d 3d5c9f3436b4 301eeba19de0 eb1a4a35855b 710e581ba524 93a0698e6352 febaa0fd9044 b1f5f2ea4fc1 c1fd0be78834 ca1d1d7a5cb7 4cbb2c653734 b8fc2fab53fd 9da2a295fd5d d08afcf53ad7 5cf3a833c3cf fbe32713266f 70713fef888a 7ed2298642b8 82c1cf16da0c ee4cbcc49f3a b98b3b93038f 4fb3de98367b ce64b65e2d1f 30ff9a5e2596 dba18e43a978 b1accca3efda df6ebce231fb 1d81e11d5dcb 3e456befca41 183a34260d17 98d23a6eb356 2b8733e11cab]
I1217 08:23:55.902581 74413 ssh_runner.go:195] Run: docker stop 807f4bcba4bb 0004bfc96ad7 74c03df23410 4bca2ef1e9d9 93cd2777ee7d 3d5c9f3436b4 301eeba19de0 eb1a4a35855b 710e581ba524 93a0698e6352 febaa0fd9044 b1f5f2ea4fc1 c1fd0be78834 ca1d1d7a5cb7 4cbb2c653734 b8fc2fab53fd 9da2a295fd5d d08afcf53ad7 5cf3a833c3cf fbe32713266f 70713fef888a 7ed2298642b8 82c1cf16da0c ee4cbcc49f3a b98b3b93038f 4fb3de98367b ce64b65e2d1f 30ff9a5e2596 dba18e43a978 b1accca3efda df6ebce231fb 1d81e11d5dcb 3e456befca41 183a34260d17 98d23a6eb356 2b8733e11cab
I1217 08:23:56.820634 74413 ssh_runner.go:195] Run: sudo systemctl stop kubelet
I1217 08:23:56.915114 74413 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1217 08:23:56.945001 74413 kubeadm.go:158] found existing configuration files:
-rw------- 1 root root 5631 Dec 17 08:21 /etc/kubernetes/admin.conf
-rw------- 1 root root 5642 Dec 17 08:22 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 5674 Dec 17 08:22 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5586 Dec 17 08:23 /etc/kubernetes/scheduler.conf
I1217 08:23:56.945063 74413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
I1217 08:23:56.968998 74413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
I1217 08:23:56.989563 74413 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
stdout:
stderr:
I1217 08:23:56.989639 74413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1217 08:23:57.013346 74413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
I1217 08:23:57.028562 74413 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:
stderr:
I1217 08:23:57.028619 74413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1217 08:23:57.040520 74413 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
I1217 08:23:57.051480 74413 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:
stderr:
I1217 08:23:57.051529 74413 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1217 08:23:57.067206 74413 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1217 08:23:57.082595 74413 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I1217 08:23:57.137426 74413 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I1217 08:23:57.911876 74413 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I1217 08:23:58.184591 74413 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I1217 08:23:58.257788 74413 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I1217 08:23:58.345381 74413 api_server.go:52] waiting for apiserver process to appear ...
I1217 08:23:58.345465 74413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 08:23:58.846051 74413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 08:23:59.345677 74413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 08:23:59.845900 74413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 08:23:59.884300 74413 api_server.go:72] duration metric: took 1.538916739s to wait for apiserver process to appear ...
I1217 08:23:59.884317 74413 api_server.go:88] waiting for apiserver healthz status ...
I1217 08:23:59.884338 74413 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I1217 08:24:01.908270 74413 api_server.go:279] https://192.168.39.217:8441/healthz returned 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
W1217 08:24:01.908293 74413 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 403:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
I1217 08:24:01.908332 74413 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I1217 08:24:02.112737 74413 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1217 08:24:02.112759 74413 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1217 08:24:02.385242 74413 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I1217 08:24:02.390116 74413 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1217 08:24:02.390135 74413 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1217 08:24:02.884787 74413 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I1217 08:24:02.895455 74413 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1217 08:24:02.895471 74413 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1217 08:24:03.385261 74413 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I1217 08:24:03.393571 74413 api_server.go:279] https://192.168.39.217:8441/healthz returned 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
W1217 08:24:03.393586 74413 api_server.go:103] status: https://192.168.39.217:8441/healthz returned error 500:
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/start-kubernetes-service-cidr-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
healthz check failed
I1217 08:24:03.885350 74413 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I1217 08:24:03.890243 74413 api_server.go:279] https://192.168.39.217:8441/healthz returned 200:
ok
I1217 08:24:03.897025 74413 api_server.go:141] control plane version: v1.34.3
I1217 08:24:03.897041 74413 api_server.go:131] duration metric: took 4.012719592s to wait for apiserver health ...
I1217 08:24:03.897049 74413 cni.go:84] Creating CNI manager for ""
I1217 08:24:03.897058 74413 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1217 08:24:03.898468 74413 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
I1217 08:24:03.899608 74413 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I1217 08:24:03.916167 74413 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I1217 08:24:03.955717 74413 system_pods.go:43] waiting for kube-system pods to appear ...
I1217 08:24:03.960826 74413 system_pods.go:59] 7 kube-system pods found
I1217 08:24:03.960863 74413 system_pods.go:61] "coredns-66bc5c9577-5dtt9" [ce6da726-fec3-4f37-a6ed-7ee26bd50283] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1217 08:24:03.960873 74413 system_pods.go:61] "etcd-functional-329536" [7d12fc9a-852b-4a25-a771-e9f0745b2614] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1217 08:24:03.960882 74413 system_pods.go:61] "kube-apiserver-functional-329536" [9ef1eebf-862a-4972-8214-6d19cb7b8a8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1217 08:24:03.960889 74413 system_pods.go:61] "kube-controller-manager-functional-329536" [b29cd03f-9589-4008-8711-14f12ac76595] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1217 08:24:03.960894 74413 system_pods.go:61] "kube-proxy-n5fcf" [15f8bc7c-cad5-4aa8-8e7c-aaa28ab07d2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1217 08:24:03.960898 74413 system_pods.go:61] "kube-scheduler-functional-329536" [6a306e62-8439-4b81-9af2-8ec489414f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1217 08:24:03.960905 74413 system_pods.go:61] "storage-provisioner" [c3a7d09e-962c-4d79-8390-5c9838190e93] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1217 08:24:03.960921 74413 system_pods.go:74] duration metric: took 5.180623ms to wait for pod list to return data ...
I1217 08:24:03.960932 74413 node_conditions.go:102] verifying NodePressure condition ...
I1217 08:24:03.964749 74413 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1217 08:24:03.964776 74413 node_conditions.go:123] node cpu capacity is 2
I1217 08:24:03.964790 74413 node_conditions.go:105] duration metric: took 3.853444ms to run NodePressure ...
I1217 08:24:03.964851 74413 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I1217 08:24:04.304806 74413 kubeadm.go:729] waiting for restarted kubelet to initialise ...
I1217 08:24:04.308192 74413 kubeadm.go:744] kubelet initialised
I1217 08:24:04.308203 74413 kubeadm.go:745] duration metric: took 3.383227ms waiting for restarted kubelet to initialise ...
I1217 08:24:04.308227 74413 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1217 08:24:04.326980 74413 ops.go:34] apiserver oom_adj: -16
I1217 08:24:04.326996 74413 kubeadm.go:602] duration metric: took 8.499710792s to restartPrimaryControlPlane
I1217 08:24:04.327008 74413 kubeadm.go:403] duration metric: took 8.538801538s to StartCluster
I1217 08:24:04.327033 74413 settings.go:142] acquiring lock: {Name:mk4705def774f7c5022350e295d39c5edbae951d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 08:24:04.327168 74413 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/22182-65389/kubeconfig
I1217 08:24:04.328301 74413 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22182-65389/kubeconfig: {Name:mkd2059f39ccf8a7d69ae2246a101134af32c535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1217 08:24:04.328597 74413 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I1217 08:24:04.328672 74413 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1217 08:24:04.328793 74413 addons.go:70] Setting storage-provisioner=true in profile "functional-329536"
I1217 08:24:04.328816 74413 addons.go:239] Setting addon storage-provisioner=true in "functional-329536"
W1217 08:24:04.328825 74413 addons.go:248] addon storage-provisioner should already be in state true
I1217 08:24:04.328833 74413 addons.go:70] Setting default-storageclass=true in profile "functional-329536"
I1217 08:24:04.328857 74413 host.go:66] Checking if "functional-329536" exists ...
I1217 08:24:04.328866 74413 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-329536"
I1217 08:24:04.328872 74413 config.go:182] Loaded profile config "functional-329536": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.34.3
I1217 08:24:04.329843 74413 out.go:179] * Verifying Kubernetes components...
I1217 08:24:04.331015 74413 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1217 08:24:04.331983 74413 addons.go:239] Setting addon default-storageclass=true in "functional-329536"
W1217 08:24:04.331994 74413 addons.go:248] addon default-storageclass should already be in state true
I1217 08:24:04.332018 74413 host.go:66] Checking if "functional-329536" exists ...
I1217 08:24:04.332931 74413 out.go:179] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1217 08:24:04.333932 74413 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
I1217 08:24:04.333941 74413 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1217 08:24:04.334225 74413 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1217 08:24:04.334234 74413 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1217 08:24:04.337137 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:24:04.337195 74413 main.go:143] libmachine: domain functional-329536 has defined MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:24:04.337564 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:24:04.337581 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:24:04.337661 74413 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c3:28:86", ip: ""} in network mk-functional-329536: {Iface:virbr1 ExpiryTime:2025-12-17 09:21:11 +0000 UTC Type:0 Mac:52:54:00:c3:28:86 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:functional-329536 Clientid:01:52:54:00:c3:28:86}
I1217 08:24:04.337691 74413 main.go:143] libmachine: domain functional-329536 has defined IP address 192.168.39.217 and MAC address 52:54:00:c3:28:86 in network mk-functional-329536
I1217 08:24:04.337732 74413 sshutil.go:56] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-65389/.minikube/machines/functional-329536/id_rsa Username:docker}
I1217 08:24:04.337997 74413 sshutil.go:56] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22182-65389/.minikube/machines/functional-329536/id_rsa Username:docker}
I1217 08:24:04.554905 74413 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1217 08:24:04.576325 74413 node_ready.go:35] waiting up to 6m0s for node "functional-329536" to be "Ready" ...
I1217 08:24:04.579013 74413 node_ready.go:49] node "functional-329536" is "Ready"
I1217 08:24:04.579026 74413 node_ready.go:38] duration metric: took 2.661819ms for node "functional-329536" to be "Ready" ...
I1217 08:24:04.579040 74413 api_server.go:52] waiting for apiserver process to appear ...
I1217 08:24:04.579107 74413 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 08:24:04.598007 74413 api_server.go:72] duration metric: took 269.376323ms to wait for apiserver process to appear ...
I1217 08:24:04.598021 74413 api_server.go:88] waiting for apiserver healthz status ...
I1217 08:24:04.598036 74413 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8441/healthz ...
I1217 08:24:04.603316 74413 api_server.go:279] https://192.168.39.217:8441/healthz returned 200:
ok
I1217 08:24:04.604211 74413 api_server.go:141] control plane version: v1.34.3
I1217 08:24:04.604228 74413 api_server.go:131] duration metric: took 6.20113ms to wait for apiserver health ...
I1217 08:24:04.604237 74413 system_pods.go:43] waiting for kube-system pods to appear ...
I1217 08:24:04.608174 74413 system_pods.go:59] 7 kube-system pods found
I1217 08:24:04.608191 74413 system_pods.go:61] "coredns-66bc5c9577-5dtt9" [ce6da726-fec3-4f37-a6ed-7ee26bd50283] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1217 08:24:04.608197 74413 system_pods.go:61] "etcd-functional-329536" [7d12fc9a-852b-4a25-a771-e9f0745b2614] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1217 08:24:04.608203 74413 system_pods.go:61] "kube-apiserver-functional-329536" [9ef1eebf-862a-4972-8214-6d19cb7b8a8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1217 08:24:04.608207 74413 system_pods.go:61] "kube-controller-manager-functional-329536" [b29cd03f-9589-4008-8711-14f12ac76595] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1217 08:24:04.608211 74413 system_pods.go:61] "kube-proxy-n5fcf" [15f8bc7c-cad5-4aa8-8e7c-aaa28ab07d2a] Running
I1217 08:24:04.608216 74413 system_pods.go:61] "kube-scheduler-functional-329536" [6a306e62-8439-4b81-9af2-8ec489414f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1217 08:24:04.608218 74413 system_pods.go:61] "storage-provisioner" [c3a7d09e-962c-4d79-8390-5c9838190e93] Running
I1217 08:24:04.608222 74413 system_pods.go:74] duration metric: took 3.981256ms to wait for pod list to return data ...
I1217 08:24:04.608227 74413 default_sa.go:34] waiting for default service account to be created ...
I1217 08:24:04.610587 74413 default_sa.go:45] found service account: "default"
I1217 08:24:04.610596 74413 default_sa.go:55] duration metric: took 2.364973ms for default service account to be created ...
I1217 08:24:04.610603 74413 system_pods.go:116] waiting for k8s-apps to be running ...
I1217 08:24:04.613390 74413 system_pods.go:86] 7 kube-system pods found
I1217 08:24:04.613406 74413 system_pods.go:89] "coredns-66bc5c9577-5dtt9" [ce6da726-fec3-4f37-a6ed-7ee26bd50283] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1217 08:24:04.613411 74413 system_pods.go:89] "etcd-functional-329536" [7d12fc9a-852b-4a25-a771-e9f0745b2614] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I1217 08:24:04.613417 74413 system_pods.go:89] "kube-apiserver-functional-329536" [9ef1eebf-862a-4972-8214-6d19cb7b8a8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I1217 08:24:04.613422 74413 system_pods.go:89] "kube-controller-manager-functional-329536" [b29cd03f-9589-4008-8711-14f12ac76595] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I1217 08:24:04.613425 74413 system_pods.go:89] "kube-proxy-n5fcf" [15f8bc7c-cad5-4aa8-8e7c-aaa28ab07d2a] Running
I1217 08:24:04.613429 74413 system_pods.go:89] "kube-scheduler-functional-329536" [6a306e62-8439-4b81-9af2-8ec489414f2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I1217 08:24:04.613431 74413 system_pods.go:89] "storage-provisioner" [c3a7d09e-962c-4d79-8390-5c9838190e93] Running
I1217 08:24:04.613436 74413 system_pods.go:126] duration metric: took 2.830024ms to wait for k8s-apps to be running ...
I1217 08:24:04.613441 74413 system_svc.go:44] waiting for kubelet service to be running ....
I1217 08:24:04.613482 74413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I1217 08:24:04.630862 74413 system_svc.go:56] duration metric: took 17.412436ms WaitForService to wait for kubelet
I1217 08:24:04.630875 74413 kubeadm.go:587] duration metric: took 302.247666ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1217 08:24:04.630891 74413 node_conditions.go:102] verifying NodePressure condition ...
I1217 08:24:04.634260 74413 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
I1217 08:24:04.634270 74413 node_conditions.go:123] node cpu capacity is 2
I1217 08:24:04.634279 74413 node_conditions.go:105] duration metric: took 3.385318ms to run NodePressure ...
I1217 08:24:04.634289 74413 start.go:242] waiting for startup goroutines ...
I1217 08:24:04.738396 74413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1217 08:24:04.743729 74413 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1217 08:24:05.460524 74413 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
I1217 08:24:05.461433 74413 addons.go:530] duration metric: took 1.132773259s for enable addons: enabled=[storage-provisioner default-storageclass]
I1217 08:24:05.461461 74413 start.go:247] waiting for cluster config update ...
I1217 08:24:05.461471 74413 start.go:256] writing updated cluster config ...
I1217 08:24:05.461701 74413 ssh_runner.go:195] Run: rm -f paused
I1217 08:24:05.469670 74413 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1217 08:24:05.475374 74413 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5dtt9" in "kube-system" namespace to be "Ready" or be gone ...
W1217 08:24:07.483385 74413 pod_ready.go:104] pod "coredns-66bc5c9577-5dtt9" is not "Ready", error: <nil>
I1217 08:24:08.485703 74413 pod_ready.go:94] pod "coredns-66bc5c9577-5dtt9" is "Ready"
I1217 08:24:08.485720 74413 pod_ready.go:86] duration metric: took 3.010334094s for pod "coredns-66bc5c9577-5dtt9" in "kube-system" namespace to be "Ready" or be gone ...
I1217 08:24:08.492114 74413 pod_ready.go:83] waiting for pod "etcd-functional-329536" in "kube-system" namespace to be "Ready" or be gone ...
W1217 08:24:10.498665 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:12.998727 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:15.498701 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:17.998490 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:20.506330 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:22.998272 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:24.998739 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:27.497555 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:29.498774 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:31.999187 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:34.498177 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:36.498864 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:38.998044 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:41.000026 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:43.497912 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:45.498870 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:47.999587 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:50.498980 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:52.499127 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:54.998814 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:57.497994 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:24:59.499221 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:01.499322 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:03.998343 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:06.498209 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:08.997838 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:11.497739 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:13.499576 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:15.998300 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:18.498778 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:21.003466 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:23.498550 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:25.498738 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:27.998886 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:30.497841 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:32.498714 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:34.499923 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:36.998004 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:39.497833 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:41.498454 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:43.498995 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:45.998150 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:47.999196 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:50.497638 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:52.998488 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:54.998878 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:57.497888 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:25:59.498287 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:01.498801 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:03.998289 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:05.998938 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:08.501280 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:10.999169 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:12.999639 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:15.498671 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:17.999236 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:20.498364 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:22.998473 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:24.999634 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:27.497800 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:29.498655 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:32.000193 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:34.497184 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:36.498890 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:38.499941 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:40.999203 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:43.499762 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:45.998832 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:48.497366 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:50.499532 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:52.998497 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:55.497696 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:57.498397 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:26:59.498653 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:01.998986 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:04.498647 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:06.498837 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:08.999216 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:11.497541 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:13.998451 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:16.497991 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:18.499376 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:20.998224 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:23.497251 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:25.498021 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:27.499037 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:29.499222 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:31.997937 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:33.999371 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:36.497862 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:38.498542 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:40.998832 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:42.998965 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:45.498365 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:47.998232 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:49.998430 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:52.499232 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:55.000679 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:57.497204 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:27:59.497829 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:28:01.498424 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
W1217 08:28:03.498506 74413 pod_ready.go:104] pod "etcd-functional-329536" is not "Ready", error: <nil>
I1217 08:28:05.469957 74413 pod_ready.go:86] duration metric: took 3m56.977818088s for pod "etcd-functional-329536" in "kube-system" namespace to be "Ready" or be gone ...
W1217 08:28:05.469980 74413 pod_ready.go:65] not all pods in "kube-system" namespace with "component=etcd" label are "Ready", will retry: waitPodCondition: context deadline exceeded
I1217 08:28:05.469995 74413 pod_ready.go:40] duration metric: took 4m0.000306277s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
I1217 08:28:05.471790 74413 out.go:203]
W1217 08:28:05.473113 74413 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
I1217 08:28:05.474315 74413 out.go:203]
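For context on the failure above: the pod_ready lines are a readiness poll, and GUEST_START/WaitExtra fails when the kube-system pods selected by label (here component=etcd) never reach the Ready condition before the 4-minute deadline. The following is a minimal client-go sketch of that kind of poll, not minikube's actual implementation; the function names are hypothetical and it assumes a kubeconfig at the default location.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsReady polls pods matching labelSelector until every one reports the
    // Ready condition, or ctx expires (which surfaces as "context deadline exceeded").
    func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, namespace, labelSelector string) error {
        for {
            pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: labelSelector})
            if err == nil {
                allReady := len(pods.Items) > 0
                for i := range pods.Items {
                    if !isPodReady(&pods.Items[i]) {
                        allReady = false
                        fmt.Printf("pod %q is not \"Ready\"\n", pods.Items[i].Name)
                    }
                }
                if allReady {
                    return nil
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(2 * time.Second): // roughly the 2-2.5s cadence seen in the log above
            }
        }
    }

    // isPodReady checks the pod's Ready condition.
    func isPodReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        if err := waitForPodsReady(ctx, cs, "kube-system", "component=etcd"); err != nil {
            fmt.Println("wait failed:", err)
        }
    }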
==> Docker <==
Dec 17 08:23:56 functional-329536 dockerd[7881]: time="2025-12-17T08:23:56.242985244Z" level=info msg="ignoring event" container=3d5c9f3436b4aba6d22ad69be6b2ce310b9aa39b7f09e271fe8efece0a210170 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 17 08:23:56 functional-329536 dockerd[7881]: time="2025-12-17T08:23:56.255382265Z" level=info msg="ignoring event" container=febaa0fd9044c178d19cfe3b3b384b072c57d8bb833b8d85c285ec387370deb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 17 08:23:56 functional-329536 dockerd[7881]: time="2025-12-17T08:23:56.255437736Z" level=info msg="ignoring event" container=301eeba19de02b45c1e5c06d18f64165cc033d8a4c5b22873c522b4ffa32f6e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 17 08:23:56 functional-329536 dockerd[7881]: time="2025-12-17T08:23:56.255462273Z" level=info msg="ignoring event" container=eb1a4a35855bf587fa0337358e5de1afb66df542b047ca5acf876c906eecdfd4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 17 08:23:56 functional-329536 dockerd[7881]: time="2025-12-17T08:23:56.255473805Z" level=info msg="ignoring event" container=93cd2777ee7d198050c7cebd5309490085c8604863d76302413a82870de55441 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 17 08:23:56 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:23:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ad8aec3a63c1fdc05d70deae5f3b6aa171bb5060378d13734eec5844fddb81b/resolv.conf as [nameserver 192.168.122.1]"
Dec 17 08:23:56 functional-329536 cri-dockerd[8804]: W1217 08:23:56.916627 8804 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Dec 17 08:23:56 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:23:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1f97045a122dc0517fb329b11527a0019e73515d5e0ff9a89a560d97fd14cf9b/resolv.conf as [nameserver 192.168.122.1]"
Dec 17 08:23:56 functional-329536 cri-dockerd[8804]: W1217 08:23:56.970314 8804 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Dec 17 08:23:57 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:23:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d3ed1f3c9c45a79f1af5f18d63c3e9309bee8284f79e1a20787e0b25aa618cf5/resolv.conf as [nameserver 192.168.122.1]"
Dec 17 08:23:57 functional-329536 cri-dockerd[8804]: W1217 08:23:57.003758 8804 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Dec 17 08:23:57 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:23:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c62d8857fc72966aff7b541828dd04392da7e5122fb756fb3d1f2d832c7e0af2/resolv.conf as [nameserver 192.168.122.1]"
Dec 17 08:23:57 functional-329536 cri-dockerd[8804]: W1217 08:23:57.084239 8804 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
Dec 17 08:23:58 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:23:58Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-swkgf_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"98d23a6eb356a73f075476163e4ddc97ba5b3ed1cb0a3b39173c418003c0d4db\""
Dec 17 08:23:58 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:23:58Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5dtt9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"93cd2777ee7d198050c7cebd5309490085c8604863d76302413a82870de55441\""
Dec 17 08:23:58 functional-329536 dockerd[7881]: time="2025-12-17T08:23:58.997182045Z" level=info msg="ignoring event" container=e62da130feb050266940c633801a7644bde117c015ac23f4f923ba56fe714fc5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 17 08:23:59 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:23:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f336be2ebd9d5abb59eae8fadaccb3fe79c2cc5cb6028e315a004f4a5171cfde/resolv.conf as [nameserver 192.168.122.1]"
Dec 17 08:23:59 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:23:59Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5dtt9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"93cd2777ee7d198050c7cebd5309490085c8604863d76302413a82870de55441\""
Dec 17 08:24:02 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:24:02Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
Dec 17 08:24:02 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:24:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9b0fb36cb6f8afd4bc0ddcf3d9761f6419c2b33ded0e3a5e184186a668bc4afd/resolv.conf as [nameserver 192.168.122.1]"
Dec 17 08:24:02 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:24:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/751a253cbdcb6d94c66c454079d011af1b4e069209e0b4654549c9202ab270bd/resolv.conf as [nameserver 192.168.122.1]"
Dec 17 08:24:20 functional-329536 dockerd[7881]: time="2025-12-17T08:24:20.527357418Z" level=info msg="ignoring event" container=31639f8ac15e9f5471c834ed04aa74b746041f8b62c428fc4a9b4900c713bad4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Dec 17 08:25:08 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:25:08Z" level=error msg="error getting RW layer size for container ID '70713fef888aacdf3f60925a526f2ad1a9c4df3c05f5e010f9e112733694b6f0': Error response from daemon: No such container: 70713fef888aacdf3f60925a526f2ad1a9c4df3c05f5e010f9e112733694b6f0"
Dec 17 08:25:08 functional-329536 cri-dockerd[8804]: time="2025-12-17T08:25:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '70713fef888aacdf3f60925a526f2ad1a9c4df3c05f5e010f9e112733694b6f0'"
Dec 17 08:27:05 functional-329536 dockerd[7881]: time="2025-12-17T08:27:05.540230351Z" level=info msg="ignoring event" container=54e4d103142975fff152cd41b79b3b231bce2ad704468858772080411c050765 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
54e4d10314297 a3e246e9556e9 About a minute ago Exited etcd 8 2ad8aec3a63c1 etcd-functional-329536 kube-system
7055f87147c7b 52546a367cc9e 4 minutes ago Running coredns 2 751a253cbdcb6 coredns-66bc5c9577-5dtt9 kube-system
17f4fa69d9f8c 6e38f40d628db 4 minutes ago Running storage-provisioner 5 9b0fb36cb6f8a storage-provisioner kube-system
a30d11883e885 36eef8e07bdd6 4 minutes ago Running kube-proxy 3 1f97045a122dc kube-proxy-n5fcf kube-system
0b7d4cebea902 aa27095f56193 4 minutes ago Running kube-apiserver 0 f336be2ebd9d5 kube-apiserver-functional-329536 kube-system
65f31683aebe1 5826b25d990d7 4 minutes ago Running kube-controller-manager 4 d3ed1f3c9c45a kube-controller-manager-functional-329536 kube-system
f760293b347ab aec12dadf56dd 4 minutes ago Running kube-scheduler 4 c62d8857fc729 kube-scheduler-functional-329536 kube-system
807f4bcba4bb2 5826b25d990d7 4 minutes ago Exited kube-controller-manager 3 3d5c9f3436b4a kube-controller-manager-functional-329536 kube-system
0004bfc96ad77 aec12dadf56dd 4 minutes ago Exited kube-scheduler 3 301eeba19de02 kube-scheduler-functional-329536 kube-system
74c03df234105 36eef8e07bdd6 4 minutes ago Exited kube-proxy 2 710e581ba5241 kube-proxy-n5fcf kube-system
ca1d1d7a5cb78 6e38f40d628db 4 minutes ago Exited storage-provisioner 4 b98b3b93038fb storage-provisioner kube-system
4cbb2c6537343 52546a367cc9e 5 minutes ago Exited coredns 1 9da2a295fd5d8 coredns-66bc5c9577-5dtt9 kube-system
==> coredns [4cbb2c653734] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
[ERROR] plugin/kubernetes: Unhandled Error
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:44081 - 3808 "HINFO IN 1078641301062948925.275007994535251375. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.016846492s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
==> coredns [7055f87147c7] <==
maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
.:53
[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
CoreDNS-1.12.1
linux/amd64, go1.24.1, 707c7c1
[INFO] 127.0.0.1:58478 - 56536 "HINFO IN 5005491868207586918.2331593304662625289. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.041184959s
==> describe nodes <==
Name: functional-329536
Roles: control-plane
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=functional-329536
kubernetes.io/os=linux
minikube.k8s.io/commit=abbf4267980db3e5fd05c132e54d55cbf2373144
minikube.k8s.io/name=functional-329536
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_12_17T08_21_38_0700
minikube.k8s.io/version=v1.37.0
node-role.kubernetes.io/control-plane=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 17 Dec 2025 08:21:35 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: functional-329536
AcquireTime: <unset>
RenewTime: Wed, 17 Dec 2025 08:27:57 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 17 Dec 2025 08:24:02 +0000 Wed, 17 Dec 2025 08:21:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 17 Dec 2025 08:24:02 +0000 Wed, 17 Dec 2025 08:21:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 17 Dec 2025 08:24:02 +0000 Wed, 17 Dec 2025 08:21:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 17 Dec 2025 08:24:02 +0000 Wed, 17 Dec 2025 08:21:40 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.39.217
Hostname: functional-329536
Capacity:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001784Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17734596Ki
hugepages-2Mi: 0
memory: 4001784Ki
pods: 110
System Info:
Machine ID: 1063413b76234f208cf5b3b961cc9bec
System UUID: 1063413b-7623-4f20-8cf5-b3b961cc9bec
Boot ID: fc9cf610-94ab-4ccc-8352-03a203b5b17b
Kernel Version: 6.6.95
OS Image: Buildroot 2025.02
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://28.5.2
Kubelet Version: v1.34.3
Kube-Proxy Version:
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-66bc5c9577-5dtt9 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 6m23s
kube-system etcd-functional-329536 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 6m28s
kube-system kube-apiserver-functional-329536 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4m4s
kube-system kube-controller-manager-functional-329536 200m (10%) 0 (0%) 0 (0%) 0 (0%) 6m28s
kube-system kube-proxy-n5fcf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m23s
kube-system kube-scheduler-functional-329536 100m (5%) 0 (0%) 0 (0%) 0 (0%) 6m28s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m22s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 170Mi (4%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m2s kube-proxy
Normal Starting 5m kube-proxy
Normal Starting 6m21s kube-proxy
Normal Starting 6m29s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 6m28s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 6m28s kubelet Node functional-329536 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m28s kubelet Node functional-329536 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m28s kubelet Node functional-329536 status is now: NodeHasSufficientPID
Normal NodeReady 6m26s kubelet Node functional-329536 status is now: NodeReady
Normal RegisteredNode 6m24s node-controller Node functional-329536 event: Registered Node functional-329536 in Controller
Warning ContainerGCFailed 5m28s kubelet rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Normal Starting 5m6s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m6s (x8 over 5m6s) kubelet Node functional-329536 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m6s (x8 over 5m6s) kubelet Node functional-329536 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m6s (x7 over 5m6s) kubelet Node functional-329536 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m6s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 4m59s node-controller Node functional-329536 event: Registered Node functional-329536 in Controller
Normal Starting 4m8s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m8s (x8 over 4m8s) kubelet Node functional-329536 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m8s (x8 over 4m8s) kubelet Node functional-329536 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m8s (x7 over 4m8s) kubelet Node functional-329536 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 4m8s kubelet Updated Node Allocatable limit across pods
Normal RegisteredNode 4m1s node-controller Node functional-329536 event: Registered Node functional-329536 in Controller
==> dmesg <==
[ +0.111151] kauditd_printk_skb: 373 callbacks suppressed
[ +0.102478] kauditd_printk_skb: 205 callbacks suppressed
[ +0.142678] kauditd_printk_skb: 165 callbacks suppressed
[ +0.638876] kauditd_printk_skb: 19 callbacks suppressed
[ +10.534105] kauditd_printk_skb: 276 callbacks suppressed
[Dec17 08:22] kauditd_printk_skb: 16 callbacks suppressed
[ +7.047883] kauditd_printk_skb: 6 callbacks suppressed
[ +15.163112] kauditd_printk_skb: 18 callbacks suppressed
[ +5.480578] kauditd_printk_skb: 28 callbacks suppressed
[ +0.009861] kauditd_printk_skb: 14 callbacks suppressed
[ +3.560052] kauditd_printk_skb: 436 callbacks suppressed
[Dec17 08:23] kauditd_printk_skb: 73 callbacks suppressed
[ +6.743269] kauditd_printk_skb: 99 callbacks suppressed
[ +0.163689] kauditd_printk_skb: 11 callbacks suppressed
[ +0.013307] kauditd_printk_skb: 12 callbacks suppressed
[ +0.352214] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +0.000004] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
[ +5.116706] kauditd_printk_skb: 22 callbacks suppressed
[ +0.017169] kauditd_printk_skb: 28 callbacks suppressed
[ +0.585702] kauditd_printk_skb: 430 callbacks suppressed
[Dec17 08:24] kauditd_printk_skb: 103 callbacks suppressed
[ +3.626623] kauditd_printk_skb: 104 callbacks suppressed
[ +28.996843] kauditd_printk_skb: 28 callbacks suppressed
[Dec17 08:25] kauditd_printk_skb: 6 callbacks suppressed
[Dec17 08:27] kauditd_printk_skb: 6 callbacks suppressed
==> etcd [54e4d1031429] <==
{"level":"warn","ts":"2025-12-17T08:27:05.517810Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"warn","ts":"2025-12-17T08:27:05.518064Z","caller":"etcdmain/config.go:270","msg":"--snapshot-count is deprecated in 3.6 and will be decommissioned in 3.7."}
{"level":"info","ts":"2025-12-17T08:27:05.518099Z","caller":"etcdmain/etcd.go:64","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.217:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--feature-gates=InitialCorruptCheck=true","--initial-advertise-peer-urls=https://192.168.39.217:2380","--initial-cluster=functional-329536=https://192.168.39.217:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.217:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.217:2380","--name=functional-329536","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--watch-progress-notify-i
nterval=5s"]}
{"level":"info","ts":"2025-12-17T08:27:05.518359Z","caller":"etcdmain/etcd.go:107","msg":"server has already been initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
{"level":"warn","ts":"2025-12-17T08:27:05.518422Z","caller":"embed/config.go:1209","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2025-12-17T08:27:05.518447Z","caller":"embed/etcd.go:138","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.217:2380"]}
{"level":"info","ts":"2025-12-17T08:27:05.518485Z","caller":"embed/etcd.go:544","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"error","ts":"2025-12-17T08:27:05.523903Z","caller":"embed/etcd.go:586","msg":"creating peer listener failed","error":"listen tcp 192.168.39.217:2380: bind: address already in use","stacktrace":"go.etcd.io/etcd/server/v3/embed.configurePeerListeners\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:586\ngo.etcd.io/etcd/server/v3/embed.StartEtcd\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:142\ngo.etcd.io/etcd/server/v3/etcdmain.startEtcd\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:207\ngo.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:114\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\tgo.etcd.io/etcd/server/v3/etcdmain/main.go:40\nmain.main\n\tgo.etcd.io/etcd/server/v3/main.go:31\nruntime.main\n\truntime/proc.go:283"}
{"level":"info","ts":"2025-12-17T08:27:05.524135Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-329536","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
{"level":"info","ts":"2025-12-17T08:27:05.524246Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-329536","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.217:2380"],"advertise-client-urls":["https://192.168.39.217:2379"]}
{"level":"fatal","ts":"2025-12-17T08:27:05.524407Z","caller":"etcdmain/etcd.go:183","msg":"discovery failed","error":"listen tcp 192.168.39.217:2380: bind: address already in use","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:183\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\tgo.etcd.io/etcd/server/v3/etcdmain/main.go:40\nmain.main\n\tgo.etcd.io/etcd/server/v3/main.go:31\nruntime.main\n\truntime/proc.go:283"}
==> kernel <==
08:28:06 up 7 min, 0 users, load average: 0.81, 0.55, 0.29
Linux functional-329536 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 03:41:16 UTC 2025 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2025.02"
==> kube-apiserver [0b7d4cebea90] <==
I1217 08:24:01.974836 1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
I1217 08:24:01.975040 1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
I1217 08:24:01.978674 1 shared_informer.go:356] "Caches are synced" controller="configmaps"
I1217 08:24:01.978842 1 handler_discovery.go:451] Starting ResourceDiscoveryManager
I1217 08:24:01.985819 1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
I1217 08:24:01.997163 1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
I1217 08:24:02.065846 1 cache.go:39] Caches are synced for LocalAvailability controller
I1217 08:24:02.072074 1 apf_controller.go:382] Running API Priority and Fairness config worker
I1217 08:24:02.072110 1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
I1217 08:24:02.072615 1 cache.go:39] Caches are synced for RemoteAvailability controller
I1217 08:24:02.073807 1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
I1217 08:24:02.074089 1 aggregator.go:171] initial CRD sync complete...
I1217 08:24:02.074117 1 autoregister_controller.go:144] Starting autoregister controller
I1217 08:24:02.074122 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1217 08:24:02.074127 1 cache.go:39] Caches are synced for autoregister controller
I1217 08:24:02.407333 1 controller.go:667] quota admission added evaluator for: serviceaccounts
I1217 08:24:02.888820 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
W1217 08:24:03.421948 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.217]
I1217 08:24:03.424927 1 controller.go:667] quota admission added evaluator for: endpoints
I1217 08:24:03.443202 1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1217 08:24:04.215388 1 controller.go:667] quota admission added evaluator for: deployments.apps
I1217 08:24:04.255452 1 controller.go:667] quota admission added evaluator for: daemonsets.apps
I1217 08:24:04.281024 1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1217 08:24:04.288095 1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1217 08:24:05.613863 1 controller.go:667] quota admission added evaluator for: replicasets.apps
==> kube-controller-manager [65f31683aebe] <==
I1217 08:24:05.358317 1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
I1217 08:24:05.359395 1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
I1217 08:24:05.359531 1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
I1217 08:24:05.360602 1 shared_informer.go:356] "Caches are synced" controller="cronjob"
I1217 08:24:05.360682 1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
I1217 08:24:05.360845 1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
I1217 08:24:05.364500 1 shared_informer.go:356] "Caches are synced" controller="taint"
I1217 08:24:05.364601 1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I1217 08:24:05.369034 1 shared_informer.go:356] "Caches are synced" controller="HPA"
I1217 08:24:05.370698 1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
I1217 08:24:05.371753 1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-329536"
I1217 08:24:05.371820 1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I1217 08:24:05.373914 1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
I1217 08:24:05.375872 1 shared_informer.go:356] "Caches are synced" controller="GC"
I1217 08:24:05.376119 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1217 08:24:05.383694 1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
I1217 08:24:05.384713 1 shared_informer.go:356] "Caches are synced" controller="node"
I1217 08:24:05.384785 1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
I1217 08:24:05.384820 1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
I1217 08:24:05.384825 1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
I1217 08:24:05.384830 1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
I1217 08:24:05.410529 1 shared_informer.go:356] "Caches are synced" controller="resource quota"
I1217 08:24:05.415873 1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
I1217 08:24:05.415904 1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
I1217 08:24:05.415911 1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
==> kube-controller-manager [807f4bcba4bb] <==
I1217 08:23:55.873175 1 serving.go:386] Generated self-signed cert in-memory
==> kube-proxy [74c03df23410] <==
I1217 08:23:55.063879 1 server_linux.go:53] "Using iptables proxy"
I1217 08:23:55.146300 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
E1217 08:23:55.147959 1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-329536&limit=500&resourceVersion=0\": dial tcp 192.168.39.217:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
==> kube-proxy [a30d11883e88] <==
I1217 08:24:03.385771 1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
I1217 08:24:03.486153 1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
I1217 08:24:03.486753 1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.217"]
E1217 08:24:03.486876 1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I1217 08:24:03.526845 1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
>
I1217 08:24:03.526915 1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I1217 08:24:03.526943 1 server_linux.go:132] "Using iptables Proxier"
I1217 08:24:03.536809 1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I1217 08:24:03.537302 1 server.go:527] "Version info" version="v1.34.3"
I1217 08:24:03.537329 1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1217 08:24:03.542303 1 config.go:200] "Starting service config controller"
I1217 08:24:03.542509 1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
I1217 08:24:03.542530 1 config.go:106] "Starting endpoint slice config controller"
I1217 08:24:03.542538 1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
I1217 08:24:03.542825 1 config.go:403] "Starting serviceCIDR config controller"
I1217 08:24:03.542837 1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
I1217 08:24:03.544201 1 config.go:309] "Starting node config controller"
I1217 08:24:03.544231 1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
I1217 08:24:03.544238 1 shared_informer.go:356] "Caches are synced" controller="node config"
I1217 08:24:03.642886 1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
I1217 08:24:03.642992 1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
I1217 08:24:03.643170 1 shared_informer.go:356] "Caches are synced" controller="service config"
==> kube-scheduler [0004bfc96ad7] <==
I1217 08:23:56.438904 1 serving.go:386] Generated self-signed cert in-memory
==> kube-scheduler [f760293b347a] <==
I1217 08:24:00.494882 1 serving.go:386] Generated self-signed cert in-memory
W1217 08:24:01.917343 1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1217 08:24:01.917689 1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1217 08:24:01.918082 1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
W1217 08:24:01.918207 1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1217 08:24:01.983015 1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
I1217 08:24:01.983049 1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1217 08:24:01.992179 1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
I1217 08:24:01.995578 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1217 08:24:01.995615 1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1217 08:24:02.000683 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
I1217 08:24:02.095988 1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
==> kubelet <==
Dec 17 08:26:15 functional-329536 kubelet[10330]: E1217 08:26:15.320447 10330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=etcd pod=etcd-functional-329536_kube-system(9766e544c4621ecf5e6fa3eaec9bb367)\"" pod="kube-system/etcd-functional-329536" podUID="9766e544c4621ecf5e6fa3eaec9bb367"
Dec 17 08:26:27 functional-329536 kubelet[10330]: I1217 08:26:27.320248 10330 scope.go:117] "RemoveContainer" containerID="1af4bfde821bf998c6b1ed31ef3aa3414809ad0addc669af35bb86ca558c8d40"
Dec 17 08:26:27 functional-329536 kubelet[10330]: E1217 08:26:27.320992 10330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=etcd pod=etcd-functional-329536_kube-system(9766e544c4621ecf5e6fa3eaec9bb367)\"" pod="kube-system/etcd-functional-329536" podUID="9766e544c4621ecf5e6fa3eaec9bb367"
Dec 17 08:26:40 functional-329536 kubelet[10330]: I1217 08:26:40.319747 10330 scope.go:117] "RemoveContainer" containerID="1af4bfde821bf998c6b1ed31ef3aa3414809ad0addc669af35bb86ca558c8d40"
Dec 17 08:26:40 functional-329536 kubelet[10330]: E1217 08:26:40.319892 10330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=etcd pod=etcd-functional-329536_kube-system(9766e544c4621ecf5e6fa3eaec9bb367)\"" pod="kube-system/etcd-functional-329536" podUID="9766e544c4621ecf5e6fa3eaec9bb367"
Dec 17 08:26:51 functional-329536 kubelet[10330]: I1217 08:26:51.319866 10330 scope.go:117] "RemoveContainer" containerID="1af4bfde821bf998c6b1ed31ef3aa3414809ad0addc669af35bb86ca558c8d40"
Dec 17 08:26:51 functional-329536 kubelet[10330]: E1217 08:26:51.320064 10330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=etcd pod=etcd-functional-329536_kube-system(9766e544c4621ecf5e6fa3eaec9bb367)\"" pod="kube-system/etcd-functional-329536" podUID="9766e544c4621ecf5e6fa3eaec9bb367"
Dec 17 08:27:05 functional-329536 kubelet[10330]: I1217 08:27:05.319759 10330 scope.go:117] "RemoveContainer" containerID="1af4bfde821bf998c6b1ed31ef3aa3414809ad0addc669af35bb86ca558c8d40"
Dec 17 08:27:05 functional-329536 kubelet[10330]: I1217 08:27:05.602972 10330 scope.go:117] "RemoveContainer" containerID="1af4bfde821bf998c6b1ed31ef3aa3414809ad0addc669af35bb86ca558c8d40"
Dec 17 08:27:05 functional-329536 kubelet[10330]: I1217 08:27:05.604358 10330 scope.go:117] "RemoveContainer" containerID="54e4d103142975fff152cd41b79b3b231bce2ad704468858772080411c050765"
Dec 17 08:27:05 functional-329536 kubelet[10330]: E1217 08:27:05.604674 10330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-329536_kube-system(9766e544c4621ecf5e6fa3eaec9bb367)\"" pod="kube-system/etcd-functional-329536" podUID="9766e544c4621ecf5e6fa3eaec9bb367"
Dec 17 08:27:06 functional-329536 kubelet[10330]: I1217 08:27:06.616858 10330 scope.go:117] "RemoveContainer" containerID="54e4d103142975fff152cd41b79b3b231bce2ad704468858772080411c050765"
Dec 17 08:27:06 functional-329536 kubelet[10330]: E1217 08:27:06.617052 10330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-329536_kube-system(9766e544c4621ecf5e6fa3eaec9bb367)\"" pod="kube-system/etcd-functional-329536" podUID="9766e544c4621ecf5e6fa3eaec9bb367"
Dec 17 08:27:07 functional-329536 kubelet[10330]: I1217 08:27:07.623480 10330 scope.go:117] "RemoveContainer" containerID="54e4d103142975fff152cd41b79b3b231bce2ad704468858772080411c050765"
Dec 17 08:27:07 functional-329536 kubelet[10330]: E1217 08:27:07.623707 10330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-329536_kube-system(9766e544c4621ecf5e6fa3eaec9bb367)\"" pod="kube-system/etcd-functional-329536" podUID="9766e544c4621ecf5e6fa3eaec9bb367"
Dec 17 08:27:15 functional-329536 kubelet[10330]: I1217 08:27:15.273713 10330 scope.go:117] "RemoveContainer" containerID="54e4d103142975fff152cd41b79b3b231bce2ad704468858772080411c050765"
Dec 17 08:27:15 functional-329536 kubelet[10330]: E1217 08:27:15.273951 10330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-329536_kube-system(9766e544c4621ecf5e6fa3eaec9bb367)\"" pod="kube-system/etcd-functional-329536" podUID="9766e544c4621ecf5e6fa3eaec9bb367"
Dec 17 08:27:26 functional-329536 kubelet[10330]: I1217 08:27:26.319413 10330 scope.go:117] "RemoveContainer" containerID="54e4d103142975fff152cd41b79b3b231bce2ad704468858772080411c050765"
Dec 17 08:27:26 functional-329536 kubelet[10330]: E1217 08:27:26.319582 10330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-329536_kube-system(9766e544c4621ecf5e6fa3eaec9bb367)\"" pod="kube-system/etcd-functional-329536" podUID="9766e544c4621ecf5e6fa3eaec9bb367"
Dec 17 08:27:38 functional-329536 kubelet[10330]: I1217 08:27:38.321155 10330 scope.go:117] "RemoveContainer" containerID="54e4d103142975fff152cd41b79b3b231bce2ad704468858772080411c050765"
Dec 17 08:27:38 functional-329536 kubelet[10330]: E1217 08:27:38.321336 10330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-329536_kube-system(9766e544c4621ecf5e6fa3eaec9bb367)\"" pod="kube-system/etcd-functional-329536" podUID="9766e544c4621ecf5e6fa3eaec9bb367"
Dec 17 08:27:49 functional-329536 kubelet[10330]: I1217 08:27:49.319954 10330 scope.go:117] "RemoveContainer" containerID="54e4d103142975fff152cd41b79b3b231bce2ad704468858772080411c050765"
Dec 17 08:27:49 functional-329536 kubelet[10330]: E1217 08:27:49.320538 10330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-329536_kube-system(9766e544c4621ecf5e6fa3eaec9bb367)\"" pod="kube-system/etcd-functional-329536" podUID="9766e544c4621ecf5e6fa3eaec9bb367"
Dec 17 08:28:00 functional-329536 kubelet[10330]: I1217 08:28:00.320337 10330 scope.go:117] "RemoveContainer" containerID="54e4d103142975fff152cd41b79b3b231bce2ad704468858772080411c050765"
Dec 17 08:28:00 functional-329536 kubelet[10330]: E1217 08:28:00.320527 10330 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=etcd pod=etcd-functional-329536_kube-system(9766e544c4621ecf5e6fa3eaec9bb367)\"" pod="kube-system/etcd-functional-329536" podUID="9766e544c4621ecf5e6fa3eaec9bb367"
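The kubelet entries above show the crash-loop restart delay doubling from 1m20s to 2m40s. Kubelet applies an exponential back-off to failed container restarts that, by default, starts at roughly 10s, doubles per failure, and caps at 5 minutes; the sketch below reproduces that schedule for illustration (an assumption about the defaults, not kubelet source).

    package main

    import (
        "fmt"
        "time"
    )

    // crashLoopBackoff returns the restart delay after n consecutive failures,
    // assuming a 10s initial delay, doubling each time, capped at 5 minutes.
    func crashLoopBackoff(n int) time.Duration {
        d := 10 * time.Second
        for i := 1; i < n; i++ {
            d *= 2
            if d >= 5*time.Minute {
                return 5 * time.Minute
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 6; n++ {
            fmt.Printf("failure %d -> back-off %s\n", n, crashLoopBackoff(n))
        }
        // failure 4 -> 1m20s and failure 5 -> 2m40s match the kubelet messages above.
    }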
==> storage-provisioner [17f4fa69d9f8] <==
W1217 08:27:41.913142 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:43.917844 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:43.926677 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:45.930593 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:45.935155 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:47.939482 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:47.944718 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:49.948156 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:49.953159 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:51.956311 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:51.966259 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:53.969307 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:53.974548 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:55.978065 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:55.987156 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:57.992492 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:27:57.998153 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:28:00.001807 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:28:00.007347 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:28:02.010579 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:28:02.019483 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:28:04.022754 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:28:04.029929 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:28:06.040907 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:28:06.057967 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
==> storage-provisioner [ca1d1d7a5cb7] <==
I1217 08:23:16.641722 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1217 08:23:16.650741 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1217 08:23:16.650803 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
W1217 08:23:16.653511 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:23:20.110445 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:23:24.372014 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:23:27.971092 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:23:31.025340 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:23:34.047753 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:23:34.054619 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I1217 08:23:34.054824 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1217 08:23:34.055418 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-329536_bc1ca7ba-1402-4a15-9be0-7cd22b11075a!
I1217 08:23:34.056155 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4097dbca-bdc7-49bd-a518-1f16026f41f8", APIVersion:"v1", ResourceVersion:"543", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-329536_bc1ca7ba-1402-4a15-9be0-7cd22b11075a became leader
W1217 08:23:34.056810 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:23:34.067865 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
I1217 08:23:34.158161 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-329536_bc1ca7ba-1402-4a15-9be0-7cd22b11075a!
W1217 08:23:36.071620 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:23:36.080155 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:23:38.084480 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:23:38.089472 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:23:40.093834 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
W1217 08:23:40.100173 1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
-- /stdout --
helpers_test.go:263: (dbg) Run: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-329536 -n functional-329536
helpers_test.go:270: (dbg) Run: kubectl --context functional-329536 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestFunctional/serial/ExtraConfig FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ExtraConfig (283.43s)